Test Report: KVM_Linux_crio 19689

af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Test fail (14/275)

TestAddons/parallel/Registry (74.27s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.058818ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003302599s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003797499s
addons_test.go:338: (dbg) Run:  kubectl --context addons-230451 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-230451 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-230451 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.078819071s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-230451 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 ip
2024/09/23 10:33:29 [DEBUG] GET http://192.168.39.142:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-230451 -n addons-230451
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 logs -n 25: (1.452333258s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-944972                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-944972                                                                     | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-056027                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-944972                                                                     | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-004546                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34819                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-004546                                                                     | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-230451 --wait=true                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-230451 ssh cat                                                                       | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | /opt/local-path-provisioner/pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| ip      | addons-230451 ip                                                                            | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:54.509930   11896 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:54.510176   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510185   11896 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:54.510189   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510371   11896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:21:54.510927   11896 out.go:352] Setting JSON to false
	I0923 10:21:54.511749   11896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":257,"bootTime":1727086657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:54.511839   11896 start.go:139] virtualization: kvm guest
	I0923 10:21:54.513820   11896 out.go:177] * [addons-230451] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:54.515097   11896 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:54.515105   11896 notify.go:220] Checking for updates...
	I0923 10:21:54.517574   11896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:54.518845   11896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:21:54.519947   11896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:54.520978   11896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:54.521954   11896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:54.523196   11896 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:54.554453   11896 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:21:54.555559   11896 start.go:297] selected driver: kvm2
	I0923 10:21:54.555580   11896 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:21:54.555601   11896 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:54.556616   11896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.556711   11896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:21:54.571291   11896 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:21:54.571371   11896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:54.571718   11896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:54.571756   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:21:54.571824   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:21:54.571833   11896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:54.571901   11896 start.go:340] cluster config:
	{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:54.572023   11896 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.574799   11896 out.go:177] * Starting "addons-230451" primary control-plane node in "addons-230451" cluster
	I0923 10:21:54.575781   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:54.575828   11896 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:54.575840   11896 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:54.575908   11896 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:54.575919   11896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:54.576245   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:21:54.576269   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json: {Name:mke557599469685c702152c654faebe5e1d076a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:54.576419   11896 start.go:360] acquireMachinesLock for addons-230451: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:21:54.576485   11896 start.go:364] duration metric: took 50.98µs to acquireMachinesLock for "addons-230451"
	I0923 10:21:54.576507   11896 start.go:93] Provisioning new machine with config: &{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:54.576577   11896 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:21:54.577964   11896 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 10:21:54.578088   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:21:54.578137   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:21:54.592162   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0923 10:21:54.592680   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:21:54.593173   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:21:54.593196   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:21:54.593565   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:21:54.593723   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:21:54.593874   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:21:54.593988   11896 start.go:159] libmachine.API.Create for "addons-230451" (driver="kvm2")
	I0923 10:21:54.594024   11896 client.go:168] LocalClient.Create starting
	I0923 10:21:54.594063   11896 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:21:54.862234   11896 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:21:54.952456   11896 main.go:141] libmachine: Running pre-create checks...
	I0923 10:21:54.952476   11896 main.go:141] libmachine: (addons-230451) Calling .PreCreateCheck
	I0923 10:21:54.952976   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:21:54.953437   11896 main.go:141] libmachine: Creating machine...
	I0923 10:21:54.953450   11896 main.go:141] libmachine: (addons-230451) Calling .Create
	I0923 10:21:54.953678   11896 main.go:141] libmachine: (addons-230451) Creating KVM machine...
	I0923 10:21:54.954811   11896 main.go:141] libmachine: (addons-230451) DBG | found existing default KVM network
	I0923 10:21:54.955692   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:54.955529   11918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0923 10:21:54.955752   11896 main.go:141] libmachine: (addons-230451) DBG | created network xml: 
	I0923 10:21:54.955775   11896 main.go:141] libmachine: (addons-230451) DBG | <network>
	I0923 10:21:54.955786   11896 main.go:141] libmachine: (addons-230451) DBG |   <name>mk-addons-230451</name>
	I0923 10:21:54.955801   11896 main.go:141] libmachine: (addons-230451) DBG |   <dns enable='no'/>
	I0923 10:21:54.955811   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955821   11896 main.go:141] libmachine: (addons-230451) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:21:54.955831   11896 main.go:141] libmachine: (addons-230451) DBG |     <dhcp>
	I0923 10:21:54.955840   11896 main.go:141] libmachine: (addons-230451) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:21:54.955852   11896 main.go:141] libmachine: (addons-230451) DBG |     </dhcp>
	I0923 10:21:54.955859   11896 main.go:141] libmachine: (addons-230451) DBG |   </ip>
	I0923 10:21:54.955868   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955876   11896 main.go:141] libmachine: (addons-230451) DBG | </network>
	I0923 10:21:54.955886   11896 main.go:141] libmachine: (addons-230451) DBG | 
	I0923 10:21:54.961052   11896 main.go:141] libmachine: (addons-230451) DBG | trying to create private KVM network mk-addons-230451 192.168.39.0/24...
	I0923 10:21:55.025203   11896 main.go:141] libmachine: (addons-230451) DBG | private KVM network mk-addons-230451 192.168.39.0/24 created
	I0923 10:21:55.025234   11896 main.go:141] libmachine: (addons-230451) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.025245   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.025189   11918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.025262   11896 main.go:141] libmachine: (addons-230451) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:21:55.025326   11896 main.go:141] libmachine: (addons-230451) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:21:55.288584   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.288456   11918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa...
	I0923 10:21:55.387986   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387858   11918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk...
	I0923 10:21:55.388016   11896 main.go:141] libmachine: (addons-230451) DBG | Writing magic tar header
	I0923 10:21:55.388026   11896 main.go:141] libmachine: (addons-230451) DBG | Writing SSH key tar header
	I0923 10:21:55.388034   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387970   11918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.388050   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451
	I0923 10:21:55.388086   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 (perms=drwx------)
	I0923 10:21:55.388098   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:21:55.388113   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:21:55.388129   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:21:55.388139   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:21:55.388148   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.388154   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:21:55.388171   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:21:55.388180   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:55.388192   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:21:55.388205   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:21:55.388216   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:21:55.388227   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home
	I0923 10:21:55.388234   11896 main.go:141] libmachine: (addons-230451) DBG | Skipping /home - not owner
	I0923 10:21:55.389182   11896 main.go:141] libmachine: (addons-230451) define libvirt domain using xml: 
	I0923 10:21:55.389204   11896 main.go:141] libmachine: (addons-230451) <domain type='kvm'>
	I0923 10:21:55.389213   11896 main.go:141] libmachine: (addons-230451)   <name>addons-230451</name>
	I0923 10:21:55.389220   11896 main.go:141] libmachine: (addons-230451)   <memory unit='MiB'>4000</memory>
	I0923 10:21:55.389228   11896 main.go:141] libmachine: (addons-230451)   <vcpu>2</vcpu>
	I0923 10:21:55.389238   11896 main.go:141] libmachine: (addons-230451)   <features>
	I0923 10:21:55.389248   11896 main.go:141] libmachine: (addons-230451)     <acpi/>
	I0923 10:21:55.389257   11896 main.go:141] libmachine: (addons-230451)     <apic/>
	I0923 10:21:55.389264   11896 main.go:141] libmachine: (addons-230451)     <pae/>
	I0923 10:21:55.389273   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389291   11896 main.go:141] libmachine: (addons-230451)   </features>
	I0923 10:21:55.389303   11896 main.go:141] libmachine: (addons-230451)   <cpu mode='host-passthrough'>
	I0923 10:21:55.389308   11896 main.go:141] libmachine: (addons-230451)   
	I0923 10:21:55.389313   11896 main.go:141] libmachine: (addons-230451)   </cpu>
	I0923 10:21:55.389318   11896 main.go:141] libmachine: (addons-230451)   <os>
	I0923 10:21:55.389337   11896 main.go:141] libmachine: (addons-230451)     <type>hvm</type>
	I0923 10:21:55.389348   11896 main.go:141] libmachine: (addons-230451)     <boot dev='cdrom'/>
	I0923 10:21:55.389352   11896 main.go:141] libmachine: (addons-230451)     <boot dev='hd'/>
	I0923 10:21:55.389359   11896 main.go:141] libmachine: (addons-230451)     <bootmenu enable='no'/>
	I0923 10:21:55.389363   11896 main.go:141] libmachine: (addons-230451)   </os>
	I0923 10:21:55.389464   11896 main.go:141] libmachine: (addons-230451)   <devices>
	I0923 10:21:55.389496   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='cdrom'>
	I0923 10:21:55.389515   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/boot2docker.iso'/>
	I0923 10:21:55.389532   11896 main.go:141] libmachine: (addons-230451)       <target dev='hdc' bus='scsi'/>
	I0923 10:21:55.389544   11896 main.go:141] libmachine: (addons-230451)       <readonly/>
	I0923 10:21:55.389553   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389565   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='disk'>
	I0923 10:21:55.389576   11896 main.go:141] libmachine: (addons-230451)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:21:55.389584   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk'/>
	I0923 10:21:55.389594   11896 main.go:141] libmachine: (addons-230451)       <target dev='hda' bus='virtio'/>
	I0923 10:21:55.389602   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389616   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389629   11896 main.go:141] libmachine: (addons-230451)       <source network='mk-addons-230451'/>
	I0923 10:21:55.389639   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389648   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389658   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389669   11896 main.go:141] libmachine: (addons-230451)       <source network='default'/>
	I0923 10:21:55.389678   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389684   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389696   11896 main.go:141] libmachine: (addons-230451)     <serial type='pty'>
	I0923 10:21:55.389707   11896 main.go:141] libmachine: (addons-230451)       <target port='0'/>
	I0923 10:21:55.389716   11896 main.go:141] libmachine: (addons-230451)     </serial>
	I0923 10:21:55.389725   11896 main.go:141] libmachine: (addons-230451)     <console type='pty'>
	I0923 10:21:55.389735   11896 main.go:141] libmachine: (addons-230451)       <target type='serial' port='0'/>
	I0923 10:21:55.389746   11896 main.go:141] libmachine: (addons-230451)     </console>
	I0923 10:21:55.389753   11896 main.go:141] libmachine: (addons-230451)     <rng model='virtio'>
	I0923 10:21:55.389772   11896 main.go:141] libmachine: (addons-230451)       <backend model='random'>/dev/random</backend>
	I0923 10:21:55.389789   11896 main.go:141] libmachine: (addons-230451)     </rng>
	I0923 10:21:55.389804   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389813   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389825   11896 main.go:141] libmachine: (addons-230451)   </devices>
	I0923 10:21:55.389833   11896 main.go:141] libmachine: (addons-230451) </domain>
	I0923 10:21:55.389840   11896 main.go:141] libmachine: (addons-230451) 
	I0923 10:21:55.442274   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:1e:65:9c in network default
	I0923 10:21:55.442896   11896 main.go:141] libmachine: (addons-230451) Ensuring networks are active...
	I0923 10:21:55.442919   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:55.443620   11896 main.go:141] libmachine: (addons-230451) Ensuring network default is active
	I0923 10:21:55.443936   11896 main.go:141] libmachine: (addons-230451) Ensuring network mk-addons-230451 is active
	I0923 10:21:55.444473   11896 main.go:141] libmachine: (addons-230451) Getting domain xml...
	I0923 10:21:55.445327   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:57.016016   11896 main.go:141] libmachine: (addons-230451) Waiting to get IP...
	I0923 10:21:57.016667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.017033   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.017054   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.017010   11918 retry.go:31] will retry after 208.635315ms: waiting for machine to come up
	I0923 10:21:57.227392   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.227733   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.227756   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.227648   11918 retry.go:31] will retry after 297.216389ms: waiting for machine to come up
	I0923 10:21:57.526245   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.526673   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.526694   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.526643   11918 retry.go:31] will retry after 293.828552ms: waiting for machine to come up
	I0923 10:21:57.822073   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.822442   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.822463   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.822410   11918 retry.go:31] will retry after 602.044959ms: waiting for machine to come up
	I0923 10:21:58.425996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:58.426504   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:58.426525   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:58.426453   11918 retry.go:31] will retry after 610.746842ms: waiting for machine to come up
	I0923 10:21:59.039341   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.039865   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.039886   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.039817   11918 retry.go:31] will retry after 688.678666ms: waiting for machine to come up
	I0923 10:21:59.730224   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.730635   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.730660   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.730596   11918 retry.go:31] will retry after 1.028645485s: waiting for machine to come up
	I0923 10:22:00.760735   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:00.761163   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:00.761193   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:00.761110   11918 retry.go:31] will retry after 973.08502ms: waiting for machine to come up
	I0923 10:22:01.735437   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:01.735826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:01.735858   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:01.735768   11918 retry.go:31] will retry after 1.395648774s: waiting for machine to come up
	I0923 10:22:03.134422   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:03.134826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:03.134854   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:03.134760   11918 retry.go:31] will retry after 1.707966873s: waiting for machine to come up
	I0923 10:22:04.844605   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:04.845022   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:04.845045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:04.844996   11918 retry.go:31] will retry after 2.702470731s: waiting for machine to come up
	I0923 10:22:07.550535   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:07.550864   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:07.550880   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:07.550829   11918 retry.go:31] will retry after 2.889295682s: waiting for machine to come up
	I0923 10:22:10.441287   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:10.441659   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:10.441679   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:10.441632   11918 retry.go:31] will retry after 2.869623302s: waiting for machine to come up
	I0923 10:22:13.314625   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:13.315023   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:13.315045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:13.314983   11918 retry.go:31] will retry after 3.640221936s: waiting for machine to come up
	I0923 10:22:16.958659   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959119   11896 main.go:141] libmachine: (addons-230451) Found IP for machine: 192.168.39.142
	I0923 10:22:16.959156   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has current primary IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959166   11896 main.go:141] libmachine: (addons-230451) Reserving static IP address...
	I0923 10:22:16.959462   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find host DHCP lease matching {name: "addons-230451", mac: "52:54:00:23:7b:36", ip: "192.168.39.142"} in network mk-addons-230451
	I0923 10:22:17.029441   11896 main.go:141] libmachine: (addons-230451) DBG | Getting to WaitForSSH function...
	I0923 10:22:17.029468   11896 main.go:141] libmachine: (addons-230451) Reserved static IP address: 192.168.39.142
	I0923 10:22:17.029481   11896 main.go:141] libmachine: (addons-230451) Waiting for SSH to be available...
	I0923 10:22:17.031574   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.031976   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.032008   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.032179   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH client type: external
	I0923 10:22:17.032208   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa (-rw-------)
	I0923 10:22:17.032242   11896 main.go:141] libmachine: (addons-230451) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:22:17.032261   11896 main.go:141] libmachine: (addons-230451) DBG | About to run SSH command:
	I0923 10:22:17.032275   11896 main.go:141] libmachine: (addons-230451) DBG | exit 0
	I0923 10:22:17.165353   11896 main.go:141] libmachine: (addons-230451) DBG | SSH cmd err, output: <nil>: 
	I0923 10:22:17.165603   11896 main.go:141] libmachine: (addons-230451) KVM machine creation complete!
	I0923 10:22:17.165853   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:17.166404   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166615   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166760   11896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:22:17.166775   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:17.167984   11896 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:22:17.167997   11896 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:22:17.168002   11896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:22:17.168007   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.170262   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170628   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.170654   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.170943   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171091   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171216   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.171352   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.171523   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.171532   11896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:22:17.276650   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.276675   11896 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:22:17.276682   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.279238   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279568   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.279618   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.279902   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280049   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280188   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.280328   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.280526   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.280539   11896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:22:17.390222   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:22:17.390295   11896 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:22:17.390302   11896 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:22:17.390309   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390534   11896 buildroot.go:166] provisioning hostname "addons-230451"
	I0923 10:22:17.390564   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390733   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.393254   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393637   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.393661   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393806   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.393974   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394097   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.394503   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.394674   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.394685   11896 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-230451 && echo "addons-230451" | sudo tee /etc/hostname
	I0923 10:22:17.515225   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-230451
	
	I0923 10:22:17.515256   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.517989   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518336   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.518363   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518538   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.518711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518849   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.519103   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.519305   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.519322   11896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-230451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-230451/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-230451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:17.634431   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.634459   11896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:22:17.634507   11896 buildroot.go:174] setting up certificates
	I0923 10:22:17.634531   11896 provision.go:84] configureAuth start
	I0923 10:22:17.634546   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.634804   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:17.637289   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637645   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.637672   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637796   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.639619   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.639935   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.639958   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.640107   11896 provision.go:143] copyHostCerts
	I0923 10:22:17.640166   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:22:17.640266   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:22:17.640357   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:22:17.640412   11896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.addons-230451 san=[127.0.0.1 192.168.39.142 addons-230451 localhost minikube]
	I0923 10:22:17.714679   11896 provision.go:177] copyRemoteCerts
	I0923 10:22:17.714730   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:17.714753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.717181   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717480   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.717505   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717645   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.717825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.717941   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.718046   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:17.804191   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:22:17.829062   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:17.853034   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:22:17.877800   11896 provision.go:87] duration metric: took 243.235441ms to configureAuth
	I0923 10:22:17.877829   11896 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:22:17.877983   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:17.878058   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.880387   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.880814   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.880840   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.881030   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.881209   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881361   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881549   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.881728   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.881938   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.881960   11896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:18.112582   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:22:18.112611   11896 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:22:18.112619   11896 main.go:141] libmachine: (addons-230451) Calling .GetURL
	I0923 10:22:18.114015   11896 main.go:141] libmachine: (addons-230451) DBG | Using libvirt version 6000000
	I0923 10:22:18.115892   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116172   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.116200   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116375   11896 main.go:141] libmachine: Docker is up and running!
	I0923 10:22:18.116385   11896 main.go:141] libmachine: Reticulating splines...
	I0923 10:22:18.116393   11896 client.go:171] duration metric: took 23.522358813s to LocalClient.Create
	I0923 10:22:18.116418   11896 start.go:167] duration metric: took 23.522430116s to libmachine.API.Create "addons-230451"
	I0923 10:22:18.116432   11896 start.go:293] postStartSetup for "addons-230451" (driver="kvm2")
	I0923 10:22:18.116444   11896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:18.116465   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.116705   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:18.116725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.118667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.118943   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.118966   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.119088   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.119236   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.119375   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.119475   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.203671   11896 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:18.207849   11896 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:22:18.207881   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:22:18.207965   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:22:18.208002   11896 start.go:296] duration metric: took 91.564102ms for postStartSetup
	I0923 10:22:18.208041   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:18.208600   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.210821   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211132   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.211160   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211370   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:22:18.211568   11896 start.go:128] duration metric: took 23.634978913s to createHost
	I0923 10:22:18.211597   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.213764   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214103   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.214126   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214261   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.214411   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214520   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214653   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.214811   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:18.214999   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:18.215010   11896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:22:18.322271   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727086938.296352149
	
	I0923 10:22:18.322297   11896 fix.go:216] guest clock: 1727086938.296352149
	I0923 10:22:18.322306   11896 fix.go:229] Guest: 2024-09-23 10:22:18.296352149 +0000 UTC Remote: 2024-09-23 10:22:18.211580004 +0000 UTC m=+23.734217766 (delta=84.772145ms)
	I0923 10:22:18.322326   11896 fix.go:200] guest clock delta is within tolerance: 84.772145ms
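The guest clock check above runs `date +%s.%N` in the VM and compares it against the host-side timestamp. A small sketch of that comparison, using the values from the log (the parsing helper is hypothetical, not minikube's code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseUnixNano parses `date +%s.%N` output ("seconds.nanoseconds") into a time.Time.
	func parseUnixNano(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate fractional part to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseUnixNano("1727086938.296352149") // guest value from the log above
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 9, 23, 10, 22, 18, 211580004, time.UTC) // host-side timestamp from the log
		fmt.Printf("guest clock delta: %v\n", guest.Sub(host))          // prints 84.772145ms
	}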
	I0923 10:22:18.322330   11896 start.go:83] releasing machines lock for "addons-230451", held for 23.74583569s
	I0923 10:22:18.322350   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.322592   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.325284   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325621   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.325666   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325767   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326263   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326436   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326529   11896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:18.326593   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.326632   11896 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:18.326655   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.329047   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329309   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329394   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329418   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329575   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329694   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329721   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.329853   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.329920   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329983   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.330068   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.330292   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.330417   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.438062   11896 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:18.444025   11896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:18.601874   11896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:22:18.607742   11896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:22:18.607802   11896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:18.624264   11896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:22:18.624289   11896 start.go:495] detecting cgroup driver to use...
	I0923 10:22:18.624345   11896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:18.639564   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:18.653568   11896 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:18.653621   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:18.667712   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:18.681874   11896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:18.792202   11896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:18.925990   11896 docker.go:233] disabling docker service ...
	I0923 10:22:18.926064   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:18.940378   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:18.953192   11896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:19.087815   11896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:19.203155   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:19.216978   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:19.235019   11896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:19.235096   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.245714   11896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:19.245818   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.256490   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.267602   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.278326   11896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:19.289301   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.299699   11896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.317469   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.328378   11896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:19.338564   11896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:19.338621   11896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:19.352191   11896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
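The two netfilter steps above follow a probe-then-fallback pattern: if the bridge sysctl key is missing, load the br_netfilter module and move on. A hypothetical Go sketch of that pattern (the helper names are made up for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its error; output is discarded for brevity.
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	// ensureBridgeNetfilter probes net.bridge.bridge-nf-call-iptables and, if the
	// key is absent, loads the br_netfilter module before retrying.
	func ensureBridgeNetfilter() error {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err == nil {
			return nil
		}
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
		return run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables")
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("bridge netfilter unavailable:", err)
		}
	}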
	I0923 10:22:19.362359   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:19.484977   11896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:19.579332   11896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:19.579411   11896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:19.584157   11896 start.go:563] Will wait 60s for crictl version
	I0923 10:22:19.584218   11896 ssh_runner.go:195] Run: which crictl
	I0923 10:22:19.587946   11896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:19.628720   11896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
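The two 60-second waits above amount to polling: first for the CRI socket to appear, then for crictl to answer. A minimal, hypothetical sketch of such a polling loop:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("CRI socket is ready")
	}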
	I0923 10:22:19.628857   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.657600   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.690821   11896 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:22:19.692029   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:19.694415   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694719   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:19.694755   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694901   11896 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:19.698798   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:19.711452   11896 kubeadm.go:883] updating cluster {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:19.711550   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:19.711592   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:19.747339   11896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:22:19.747410   11896 ssh_runner.go:195] Run: which lz4
	I0923 10:22:19.751336   11896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:22:19.755656   11896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:22:19.755687   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:22:21.047377   11896 crio.go:462] duration metric: took 1.296092639s to copy over tarball
	I0923 10:22:21.047452   11896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:22:23.149022   11896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.101536224s)
	I0923 10:22:23.149063   11896 crio.go:469] duration metric: took 2.101658311s to extract the tarball
	I0923 10:22:23.149074   11896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 10:22:23.186090   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:23.231874   11896 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:23.231895   11896 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:23.231902   11896 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.31.1 crio true true} ...
	I0923 10:22:23.231987   11896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-230451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:23.232047   11896 ssh_runner.go:195] Run: crio config
	I0923 10:22:23.284759   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:23.284784   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:23.284800   11896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:23.284832   11896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-230451 NodeName:addons-230451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:23.284967   11896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-230451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:23.285038   11896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:23.294894   11896 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:23.294968   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:23.304559   11896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:22:23.321682   11896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:23.338467   11896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 10:22:23.355102   11896 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:23.359077   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:23.371614   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:23.497716   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:23.524962   11896 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451 for IP: 192.168.39.142
	I0923 10:22:23.524985   11896 certs.go:194] generating shared ca certs ...
	I0923 10:22:23.525001   11896 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.525125   11896 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:22:23.653794   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt ...
	I0923 10:22:23.653826   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt: {Name:mk0d92c2a9963fcf15ffb070721c588192e7736e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.653986   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key ...
	I0923 10:22:23.653996   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key: {Name:mkeb4e4ef8ef3c516f46598d48867c8293e2d97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.654085   11896 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:22:23.786686   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt ...
	I0923 10:22:23.786718   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt: {Name:mk4094838d6b10d87fe353fc7ecb8f6c0f591232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786881   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key ...
	I0923 10:22:23.786892   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key: {Name:mkae41c92d5aff93d9eaa4a90706202e465fd08d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786960   11896 certs.go:256] generating profile certs ...
	I0923 10:22:23.787011   11896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key
	I0923 10:22:23.787024   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt with IP's: []
	I0923 10:22:24.040672   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt ...
	I0923 10:22:24.040705   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: {Name:mk12ca8a37f255852c15957acdaaac5803f6db08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040873   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key ...
	I0923 10:22:24.040883   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key: {Name:mk5ec5d734cc6123b964d4a8aa27ee9625037ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040949   11896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89
	I0923 10:22:24.040966   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0923 10:22:24.248598   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 ...
	I0923 10:22:24.248628   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89: {Name:mk9332743467473c4d78e8a673a2ddc310d8086b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248782   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 ...
	I0923 10:22:24.248794   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89: {Name:mk563d416f16b853b493dbf6317b9fb699d8141e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248878   11896 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt
	I0923 10:22:24.248949   11896 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key
	I0923 10:22:24.248994   11896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key
	I0923 10:22:24.249010   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt with IP's: []
	I0923 10:22:24.333105   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt ...
	I0923 10:22:24.333135   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt: {Name:mk1c36ccdfe89e6949c41221860582d71d9abecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.333299   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key ...
	I0923 10:22:24.333309   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key: {Name:mk001f630ca2a3ebb6948b9fe6cbe0a137191074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.333516   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:24.333586   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:22:24.333624   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:24.333649   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
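The certificate steps above generate a self-signed "minikubeCA" plus per-profile certs before copying them into the VM. A short sketch of generating such a CA with Go's crypto/x509 is shown below; it is an illustration under assumed parameters (RSA 2048, 10-year validity), not minikube's actual crypto code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate a throwaway CA key pair (key type and size are assumptions).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template acts as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		writePEM("ca.crt", "CERTIFICATE", der)
		writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
	}

	// writePEM writes a single PEM block to path.
	func writePEM(path, blockType string, der []byte) {
		f, err := os.Create(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if err := pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}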
	I0923 10:22:24.334174   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:24.364904   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:24.389692   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:24.413480   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:22:24.437332   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:24.463620   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:22:24.489652   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:24.515979   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:24.542229   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:24.568853   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:24.589287   11896 ssh_runner.go:195] Run: openssl version
	I0923 10:22:24.596782   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:24.607940   11896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612566   11896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612615   11896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.618835   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:22:24.629990   11896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:24.634389   11896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:24.634449   11896 kubeadm.go:392] StartCluster: {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:24.634545   11896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:24.634624   11896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:24.674296   11896 cri.go:89] found id: ""
	I0923 10:22:24.674376   11896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:24.684623   11896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:24.695036   11896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:24.707226   11896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:24.707249   11896 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:24.707293   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:24.716855   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:24.716917   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:24.727043   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:24.736874   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:24.736946   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:24.746697   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.756313   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:24.756377   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.766227   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:24.775698   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:24.775768   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
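The four grep/rm pairs above implement one rule: any pre-existing kubeconfig that does not reference the expected control-plane endpoint is removed before `kubeadm init` runs. A hypothetical Go sketch of that check (file list and endpoint taken from the log, helper name invented):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleConfigs deletes kubeconfig files that do not mention the expected
	// control-plane endpoint; missing files are treated as already stale.
	func removeStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // up to date, keep it
			}
			os.Remove(f) // ignore errors; on a first start the file may simply not exist
		}
	}

	func main() {
		removeStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
		fmt.Println("stale kubeconfig cleanup complete")
	}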
	I0923 10:22:24.786611   11896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:22:24.838767   11896 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:24.838821   11896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:24.940902   11896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:24.941087   11896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:24.941212   11896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:24.948875   11896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:25.257696   11896 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:25.257801   11896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:25.257881   11896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:25.257985   11896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:25.258096   11896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:25.363288   11896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:25.425568   11896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:25.496334   11896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:25.496516   11896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.661761   11896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:25.661907   11896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.727123   11896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:25.906579   11896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:25.974535   11896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:25.974623   11896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:26.123945   11896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:26.269690   11896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:26.518592   11896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:26.597902   11896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:26.831627   11896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:26.832272   11896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:26.836780   11896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:26.838584   11896 out.go:235]   - Booting up control plane ...
	I0923 10:22:26.838682   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:26.838755   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:26.839231   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:26.853944   11896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:26.861028   11896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:26.861120   11896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:26.983148   11896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:26.983286   11896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:27.483290   11896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.847264ms
	I0923 10:22:27.483400   11896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:32.981821   11896 kubeadm.go:310] [api-check] The API server is healthy after 5.502127762s
	I0923 10:22:32.994814   11896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:33.013765   11896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:33.046425   11896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:33.046697   11896 kubeadm.go:310] [mark-control-plane] Marking the node addons-230451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:33.059414   11896 kubeadm.go:310] [bootstrap-token] Using token: 2hvssy.27mbk5fz3uxysew6
	I0923 10:22:33.060728   11896 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:33.060856   11896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:33.066668   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:33.078485   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:33.081626   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:33.087430   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:33.091457   11896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:33.390136   11896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:33.813952   11896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:34.387868   11896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:34.388882   11896 kubeadm.go:310] 
	I0923 10:22:34.388988   11896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:34.388998   11896 kubeadm.go:310] 
	I0923 10:22:34.389127   11896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:34.389143   11896 kubeadm.go:310] 
	I0923 10:22:34.389170   11896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:34.389244   11896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:34.389326   11896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:34.389341   11896 kubeadm.go:310] 
	I0923 10:22:34.389420   11896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:34.389431   11896 kubeadm.go:310] 
	I0923 10:22:34.389498   11896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:34.389516   11896 kubeadm.go:310] 
	I0923 10:22:34.389562   11896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:34.389676   11896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:34.389782   11896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:34.389792   11896 kubeadm.go:310] 
	I0923 10:22:34.389900   11896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:34.389993   11896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:34.390002   11896 kubeadm.go:310] 
	I0923 10:22:34.390104   11896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390230   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:22:34.390260   11896 kubeadm.go:310] 	--control-plane 
	I0923 10:22:34.390266   11896 kubeadm.go:310] 
	I0923 10:22:34.390390   11896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:34.390400   11896 kubeadm.go:310] 
	I0923 10:22:34.390516   11896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390643   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:22:34.391299   11896 kubeadm.go:310] W0923 10:22:24.818359     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391630   11896 kubeadm.go:310] W0923 10:22:24.819029     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391761   11896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:22:34.391794   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:34.391806   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:34.393547   11896 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:22:34.394830   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:22:34.412319   11896 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
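The 496-byte conflist written above is not echoed into the log. For orientation only, a bridge CNI configuration of the kind minikube places under /etc/cni/net.d typically looks like the sketch below; the concrete values (plugin list, pod subnet) are assumptions for illustration, not the contents of the file from this run.

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}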
	I0923 10:22:34.431070   11896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:34.431130   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:34.431136   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-230451 minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-230451 minikube.k8s.io/primary=true
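The kubectl calls above are imperative. For readability, the clusterrolebinding created by the first command corresponds to a manifest along the following lines (a declarative equivalent derived from the logged flags, not an object applied in this run):

	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: minikube-rbac
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cluster-admin
	subjects:
	- kind: ServiceAccount
	  name: default
	  namespace: kube-system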
	I0923 10:22:34.546608   11896 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:34.546625   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.047328   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.546823   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.046794   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.547056   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.046889   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.547633   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.046761   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.547665   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.047581   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.133362   11896 kubeadm.go:1113] duration metric: took 4.702301784s to wait for elevateKubeSystemPrivileges
	I0923 10:22:39.133409   11896 kubeadm.go:394] duration metric: took 14.498964743s to StartCluster
	I0923 10:22:39.133426   11896 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.133569   11896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:22:39.133997   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.134254   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:39.134262   11896 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:39.134340   11896 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:39.134490   11896 addons.go:69] Setting yakd=true in profile "addons-230451"
	I0923 10:22:39.134508   11896 addons.go:234] Setting addon yakd=true in "addons-230451"
	I0923 10:22:39.134521   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.134537   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134577   11896 addons.go:69] Setting inspektor-gadget=true in profile "addons-230451"
	I0923 10:22:39.134590   11896 addons.go:234] Setting addon inspektor-gadget=true in "addons-230451"
	I0923 10:22:39.134616   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134702   11896 addons.go:69] Setting storage-provisioner=true in profile "addons-230451"
	I0923 10:22:39.134726   11896 addons.go:234] Setting addon storage-provisioner=true in "addons-230451"
	I0923 10:22:39.134749   11896 addons.go:69] Setting registry=true in profile "addons-230451"
	I0923 10:22:39.135058   11896 addons.go:234] Setting addon registry=true in "addons-230451"
	I0923 10:22:39.135093   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134729   11896 addons.go:69] Setting cloud-spanner=true in profile "addons-230451"
	I0923 10:22:39.134732   11896 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-230451"
	I0923 10:22:39.135178   11896 addons.go:69] Setting volcano=true in profile "addons-230451"
	I0923 10:22:39.135163   11896 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-230451"
	I0923 10:22:39.135195   11896 addons.go:234] Setting addon volcano=true in "addons-230451"
	I0923 10:22:39.135209   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-230451"
	I0923 10:22:39.135225   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135226   11896 addons.go:69] Setting volumesnapshots=true in profile "addons-230451"
	I0923 10:22:39.135243   11896 addons.go:234] Setting addon volumesnapshots=true in "addons-230451"
	I0923 10:22:39.135269   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134757   11896 addons.go:69] Setting metrics-server=true in profile "addons-230451"
	I0923 10:22:39.135294   11896 addons.go:234] Setting addon metrics-server=true in "addons-230451"
	I0923 10:22:39.135313   11896 addons.go:234] Setting addon cloud-spanner=true in "addons-230451"
	I0923 10:22:39.135037   11896 addons.go:69] Setting default-storageclass=true in profile "addons-230451"
	I0923 10:22:39.135326   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135334   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135346   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-230451"
	I0923 10:22:39.135361   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135745   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135322   11896 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-230451"
	I0923 10:22:39.135770   11896 addons.go:69] Setting ingress-dns=true in profile "addons-230451"
	I0923 10:22:39.135775   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.135782   11896 addons.go:234] Setting addon ingress-dns=true in "addons-230451"
	I0923 10:22:39.135791   11896 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:39.135814   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135811   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135827   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135864   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136234   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136268   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136281   11896 addons.go:69] Setting gcp-auth=true in profile "addons-230451"
	I0923 10:22:39.136303   11896 mustload.go:65] Loading cluster: addons-230451
	I0923 10:22:39.136368   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136406   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.134746   11896 addons.go:69] Setting ingress=true in profile "addons-230451"
	I0923 10:22:39.136467   11896 addons.go:234] Setting addon ingress=true in "addons-230451"
	I0923 10:22:39.136921   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.137052   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137087   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137214   11896 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-230451"
	I0923 10:22:39.137372   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137507   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137538   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137549   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137614   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.137976   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137511   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.138578   11896 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:39.139899   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:39.145488   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145585   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145613   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145654   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145676   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145800   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145841   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145871   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145891   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145914   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145918   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145952   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145983   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.161544   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0923 10:22:39.161884   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0923 10:22:39.162070   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0923 10:22:39.162264   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.162826   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.162851   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.162936   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163040   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163434   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163454   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163580   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0923 10:22:39.163764   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163788   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163840   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.163934   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0923 10:22:39.164104   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.164684   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.164721   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.185510   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0923 10:22:39.185571   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.185662   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185706   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185909   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.185926   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.186778   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.186932   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.186951   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.187346   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187387   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187436   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187463   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187522   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.187703   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187731   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.192887   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.193023   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.201290   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.201305   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.201820   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.201838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.201956   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201993   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.202335   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.229941   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0923 10:22:39.229953   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:22:39.229981   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0923 10:22:39.230081   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0923 10:22:39.229945   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0923 10:22:39.230091   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0923 10:22:39.230158   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0923 10:22:39.230232   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0923 10:22:39.230239   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0923 10:22:39.230393   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.230446   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.231158   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231163   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231251   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231315   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231351   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231380   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231777   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0923 10:22:39.231833   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.231847   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.231916   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231949   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.232175   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232191   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232195   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232209   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232317   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232328   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232431   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232446   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232586   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232645   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232647   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232657   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232731   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232765   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232769   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232778   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232780   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232793   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232834   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233524   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233547   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233528   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233605   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233669   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.233682   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.233731   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233898   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.233933   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.233988   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234016   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234116   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.234147   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234176   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234491   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234491   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234526   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234552   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234889   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234926   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.235293   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.235441   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.236819   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.236838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.237864   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.238168   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.238717   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.240479   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240843   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240799   11896 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-230451"
	I0923 10:22:39.240943   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.241475   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241513   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241572   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.241620   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.241673   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241694   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:39.241712   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241728   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241939   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:39.241966   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241981   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 10:22:39.242061   11896 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:39.242209   11896 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:39.243364   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.243382   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:39.243400   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.243621   11896 addons.go:234] Setting addon default-storageclass=true in "addons-230451"
	I0923 10:22:39.243659   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.244006   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.244048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.245011   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:39.245411   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0923 10:22:39.245745   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.246261   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.246280   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.246342   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246653   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.246702   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.246763   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246918   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.247079   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.247234   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.247287   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.247413   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.248325   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:39.249556   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:39.250623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:39.251623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:39.252410   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0923 10:22:39.252964   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.253331   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.253997   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:39.254684   11896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:22:39.255992   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.256016   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.256228   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.256248   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:39.256266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.256781   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:39.257114   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.258716   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:39.259215   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259570   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.259591   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259735   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:39.259749   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:39.259767   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.259814   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.259944   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.260065   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.260176   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.262079   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0923 10:22:39.262584   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.262683   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263031   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.263060   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263202   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.263213   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.263419   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.263572   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.263624   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.264175   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.264214   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.264455   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.264597   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.265940   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.265968   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.271246   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0923 10:22:39.271789   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.272388   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.272405   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.272805   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.273028   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.274894   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0923 10:22:39.275213   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.275844   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.275867   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.276203   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.278018   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0923 10:22:39.278347   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0923 10:22:39.278503   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.278767   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0923 10:22:39.278898   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.279182   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.279681   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.279702   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.279763   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.280273   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.280289   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.280330   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280582   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.280689   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280918   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281367   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.281152   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0923 10:22:39.281714   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.281734   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.281796   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281834   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.282057   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.282159   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.282388   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.282544   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.282560   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.282678   11896 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:39.283012   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.283243   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.283634   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:39.283650   11896 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:39.283668   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.283893   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.285400   11896 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:39.286497   11896 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:39.286503   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.286515   11896 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:39.286544   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.286846   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.286869   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.287301   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.287493   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.287665   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.287806   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.288302   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.288696   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0923 10:22:39.289083   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.289683   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.289701   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.290084   11896 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:39.290241   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.290292   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290473   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.290735   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.290773   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290925   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.291070   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.291212   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.291343   11896 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.291363   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:39.291378   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.291451   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295085   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.295103   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295534   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.295687   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.295814   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.297105   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0923 10:22:39.298051   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298086   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298472   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298495   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0923 10:22:39.298498   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298662   11896 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:39.298748   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298766   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298991   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.299054   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299408   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299577   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300091   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300214   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.300223   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.300609   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.300821   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300911   11896 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:39.301783   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301909   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301978   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0923 10:22:39.302139   11896 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:39.302152   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:39.302178   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.302381   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.302852   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.302875   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.302984   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.303301   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.303431   11896 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:39.303515   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:39.303574   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.304688   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:39.304717   11896 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:39.304740   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.304744   11896 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.304807   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:39.304819   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.305822   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.307556   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0923 10:22:39.307586   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.307720   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:39.307774   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.308043   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.308066   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.308423   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.308972   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.309094   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.309127   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.308530   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.309353   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.309801   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.309838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.310129   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.310151   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.310205   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.310257   11896 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:39.310305   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.310367   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.310501   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.310551   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.310650   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.310779   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.311023   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311548   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.311571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311666   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.311778   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:39.311805   11896 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:39.311825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.311915   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.312185   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.312202   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:39.312219   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.312343   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0923 10:22:39.312499   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.312659   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.312900   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.312942   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.313158   11896 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.313227   11896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:39.313245   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.313364   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.313398   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.313741   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.313923   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.315763   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.315810   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316253   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.316283   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316514   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.316662   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.316765   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.316924   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.317045   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317358   11896 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:39.317533   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.317571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317710   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.317848   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.317973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.318106   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.318191   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318580   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.318598   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318878   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.319048   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.319206   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.319289   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.320204   11896 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:39.321465   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.321479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:39.321491   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.323996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324361   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.324386   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324495   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.324602   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.324711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.324788   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	W0923 10:22:39.325511   11896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
	I0923 10:22:39.325542   11896 retry.go:31] will retry after 146.678947ms: ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
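The "installing /etc/kubernetes/addons/*.yaml" / "scp memory --> <path> (N bytes)" pairs above show each addon manifest being pushed to the node over the SSH clients that were just opened, with a transient handshake failure simply retried. Below is a minimal sketch of that push-bytes-over-SSH idea, assuming a plain ssh binary on the PATH; the helper name copyBytes, the host string, and the target path are hypothetical, and minikube's own sshutil/ssh_runner differ in detail.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// copyBytes streams b into path on the remote host by piping it through
	// "sudo tee" over ssh. Hypothetical illustration of the
	// "scp memory --> <path> (N bytes)" steps in the log; not minikube's
	// actual implementation.
	func copyBytes(host, path string, b []byte) error {
		cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", path))
		cmd.Stdin = bytes.NewReader(b)
		return cmd.Run()
	}

	func main() {
		// Hypothetical host and path, for shape only.
		err := copyBytes("docker@192.168.39.142", "/tmp/example.yaml", []byte("# example manifest\n"))
		fmt.Println(err)
	}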
	I0923 10:22:39.557159   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.580915   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:39.580948   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:39.596569   11896 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:39.596596   11896 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:39.610676   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.621265   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.641318   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.653920   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.688552   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:39.688582   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:39.695267   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:39.695299   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:39.700872   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.701278   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:39.701293   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:39.730612   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:39.730640   11896 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:39.741177   11896 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:39.741202   11896 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:39.775359   11896 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:39.775388   11896 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:39.777672   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.829748   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:39.829779   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:39.845681   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:39.845709   11896 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:39.868956   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:39.868979   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:39.878049   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:39.878072   11896 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:39.910637   11896 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:39.910662   11896 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:39.925074   11896 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:39.925100   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:39.964060   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:39.964082   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:40.059843   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.059864   11896 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:40.073448   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:40.073471   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:40.094580   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:40.094602   11896 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:40.102412   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:40.102434   11896 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:40.111856   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:40.111870   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:40.149555   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:40.244365   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:40.244393   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:40.286452   11896 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.286479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:40.301058   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.319790   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.319818   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:40.395452   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:40.395478   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:40.420594   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.465580   11896 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:40.465611   11896 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:40.517028   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.586224   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:40.586264   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:40.716640   11896 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:40.716667   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:40.864786   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:40.864809   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:40.974629   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:41.329483   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:41.329520   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:41.615715   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:41.615746   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:41.850585   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:41.850616   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:42.139510   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.139536   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:42.203522   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.646323739s)
	I0923 10:22:42.203571   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.203579   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.203637   11896 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.62266543s)
	I0923 10:22:42.203652   11896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.622706839s)
	I0923 10:22:42.203673   11896 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:42.203984   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204051   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204059   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.204072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.204292   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204308   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204357   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204648   11896 node_ready.go:35] waiting up to 6m0s for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265962   11896 node_ready.go:49] node "addons-230451" has status "Ready":"True"
	I0923 10:22:42.265985   11896 node_ready.go:38] duration metric: took 61.313529ms for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265995   11896 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:22:42.382117   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
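From this point the log is dominated by "waiting for pod ... current state: Pending" lines: each addon verification polls the pod's state on an interval until it is Ready or a deadline (6m0s above) expires. A minimal sketch of that poll-until-timeout shape follows; the helper name waitFor and the interval are assumptions for illustration, not taken from minikube's pod_ready.go/kapi.go.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls cond on each tick until it reports true, cond fails, or
	// the context deadline passes, mirroring the "waiting up to 6m0s for pod
	// ... to be Ready" loops in the log. Sketch only.
	func waitFor(ctx context.Context, interval time.Duration, cond func() (bool, error)) error {
		tick := time.NewTicker(interval)
		defer tick.Stop()
		for {
			done, err := cond()
			if err != nil || done {
				return err
			}
			select {
			case <-ctx.Done():
				return errors.New("timed out waiting for the condition")
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		start := time.Now()
		// Stand-in condition; the real check would read the pod's Ready
		// condition from the API server.
		err := waitFor(ctx, 500*time.Millisecond, func() (bool, error) {
			return time.Since(start) > 2*time.Second, nil
		})
		fmt.Println(err) // <nil> once the condition is met
	}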
	I0923 10:22:42.433215   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.639353   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.028639151s)
	I0923 10:22:42.639403   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639414   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639437   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.018135683s)
	I0923 10:22:42.639481   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639496   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639513   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.99816104s)
	I0923 10:22:42.639574   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639591   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639699   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639710   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639718   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639731   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639808   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639885   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639923   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639930   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639937   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639944   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.640007   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640014   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.640168   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640182   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641237   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641246   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641258   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.641266   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.641730   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641744   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.815687   11896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-230451" context rescaled to 1 replicas
	I0923 10:22:42.853390   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.853416   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.853662   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.853720   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:44.448550   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:46.283789   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:46.283834   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.286793   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287202   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.287227   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287394   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.287553   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.287738   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.287873   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:46.555575   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:46.623519   11896 addons.go:234] Setting addon gcp-auth=true in "addons-230451"
	I0923 10:22:46.623584   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:46.624001   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.624048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.639512   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0923 10:22:46.639966   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.640495   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.640515   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.640853   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.641315   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.641348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.656710   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0923 10:22:46.657190   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.657684   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.657706   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.658044   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.658273   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:46.659892   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:46.660080   11896 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:46.660106   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.662909   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663305   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.663330   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663560   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.663699   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.663835   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.663965   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:47.013493   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:47.307143   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.606234939s)
	I0923 10:22:47.307203   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307215   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307214   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.5295194s)
	I0923 10:22:47.307233   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307245   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307246   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.653288375s)
	I0923 10:22:47.307261   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.157672592s)
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307316   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307318   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307367   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.006265482s)
	I0923 10:22:47.307413   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307416   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.886776853s)
	I0923 10:22:47.307425   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307441   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307452   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307512   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.790448754s)
	W0923 10:22:47.307537   11896 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:47.307568   11896 retry.go:31] will retry after 312.840585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
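The failure and retry above are an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same invocation that creates the volumesnapshot CRDs, and the API server has not yet registered the new kind when the object is validated, hence "no matches for kind ... ensure CRDs are installed first". Retrying a moment later, as the log does here and as the later apply --force run bears out, succeeds once the CRD is established. A minimal sketch of that retry-on-missing-kind pattern follows; the helper name and backoff values are assumptions, not minikube's addons.go/retry.go code.

	package main

	import (
		"context"
		"fmt"
		"strings"
		"time"
	)

	// applyUntilCRDReady retries apply while it fails with the "no matches
	// for kind" error kubectl reports before a CRD-backed API is registered.
	// Illustrative only.
	func applyUntilCRDReady(ctx context.Context, apply func() error) error {
		delay := 300 * time.Millisecond
		for {
			err := apply()
			if err == nil || !strings.Contains(err.Error(), "no matches for kind") {
				return err
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(delay):
			}
			delay *= 2 // back off before the next attempt
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		attempts := 0
		// Stand-in for running kubectl apply; fails once the way the log
		// does, then succeeds after the (simulated) CRD becomes available.
		err := applyUntilCRDReady(ctx, func() error {
			attempts++
			if attempts == 1 {
				return fmt.Errorf(`no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"`)
			}
			return nil
		})
		fmt.Println(err, "after", attempts, "attempts")
	}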
	I0923 10:22:47.307652   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.332993076s)
	I0923 10:22:47.307672   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307694   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307874   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307912   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307930   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307936   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307954   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307957   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307963   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307966   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307973   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307977   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307984   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307941   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308023   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308030   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308075   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308102   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308105   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308114   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308121   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308128   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308132   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308135   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308138   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308142   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308145   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308165   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308177   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308185   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308191   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.309012   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309044   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309052   11896 addons.go:475] Verifying addon registry=true in "addons-230451"
	I0923 10:22:47.309241   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309250   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309257   11896 addons.go:475] Verifying addon metrics-server=true in "addons-230451"
	I0923 10:22:47.309419   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309453   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309460   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309479   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309499   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309736   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309772   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309779   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.310028   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.310059   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.310066   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311116   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.311130   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311151   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.311171   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.312036   11896 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-230451 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:47.312654   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.312668   11896 out.go:177] * Verifying registry addon...
	I0923 10:22:47.312738   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.312748   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.312802   11896 addons.go:475] Verifying addon ingress=true in "addons-230451"
	I0923 10:22:47.313891   11896 out.go:177] * Verifying ingress addon...
	I0923 10:22:47.314808   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:47.315984   11896 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:47.333135   11896 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:47.333156   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.333672   11896 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:47.333694   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.362191   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.362210   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.362500   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.362519   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.620787   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:47.853958   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.854430   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.976575   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.543318151s)
	I0923 10:22:47.976615   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976627   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.976662   11896 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.31655795s)
	I0923 10:22:47.976916   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.976936   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.976944   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976951   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.977493   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.977493   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.977516   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.977530   11896 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:47.978353   11896 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:47.979244   11896 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:47.980816   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:47.981547   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:47.981951   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:47.981965   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:48.012863   11896 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:48.012883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.081072   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:48.081094   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:48.235021   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.235041   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:48.323476   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.325316   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.329262   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.487988   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.823283   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.823712   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.987157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.319059   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.320824   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.394285   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:49.486336   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.828379   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.845245   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.018644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.230146   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.609312903s)
	I0923 10:22:50.230207   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230224   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230234   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.904884388s)
	I0923 10:22:50.230272   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230290   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230489   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230525   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230539   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230546   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230590   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230616   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230653   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230664   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230671   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230801   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230830   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230834   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230842   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230852   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230861   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.232850   11896 addons.go:475] Verifying addon gcp-auth=true in "addons-230451"
	I0923 10:22:50.234749   11896 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:50.236715   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:50.240230   11896 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:50.240245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.341082   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.341419   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.485879   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.741139   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.819391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.822087   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.987076   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.240553   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.318867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.320884   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.740284   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.818704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.821561   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.888695   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:51.986219   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.241303   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.320629   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.321209   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.486705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.740428   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.819857   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.820725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.986468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.241277   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.318492   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.320484   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.520510   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.969717   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.974986   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.975544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.977863   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:53.986625   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.240759   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.320774   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.321373   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.486278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.740966   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.819228   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.822185   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.986658   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.240365   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.318431   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.320427   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.486106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.740761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.823261   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.825324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.989815   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.241561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.320639   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.320643   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.388229   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:56.487473   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.740723   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.819638   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.821374   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.986618   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.241599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.319347   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.320708   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.486908   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.740748   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.820700   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.820754   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.987523   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.239942   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.319913   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.320838   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.389727   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:58.488040   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.741176   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.818677   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.819952   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.986499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.240344   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.319170   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.321183   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.486469   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.740550   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.819952   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.823020   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.986806   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.240835   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.319990   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.321306   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.486611   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.820118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.821668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.889293   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:00.986752   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.240810   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.321217   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.321511   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.486551   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.741019   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.819706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.820249   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.986133   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.240968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.319524   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.322199   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.493692   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.740885   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.819358   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.821237   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.224620   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.337753   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.338071   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.338115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.387890   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:03.485468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.739963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.820105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.820454   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.986601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.240576   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.321031   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.321397   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.485628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.007814   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.008134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.008442   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.011226   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.260975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.320236   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.321513   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.389023   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:05.487041   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.740227   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.818341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.819725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.986304   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.240486   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.318856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.321629   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.486680   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.740290   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.820149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.820293   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.986074   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.240910   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.319345   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.320504   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.485787   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.740373   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.820179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.821686   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.888632   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:07.986582   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.239642   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.319453   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.321440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.486021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.741278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.818653   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.820061   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.987104   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.242250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.319190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.320606   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.487395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.740299   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.818478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.820810   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.985704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.240100   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.318707   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.320481   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.391013   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:10.486242   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.740836   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.819488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.820601   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.986709   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.241401   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.318575   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.320781   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.486517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.740599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.819000   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.820650   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.985664   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.241013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.320039   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.320366   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.486654   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.819149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.821095   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.887785   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:12.986107   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.241268   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.318846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.320609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.486601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.740348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.820668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.986922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.240485   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.320070   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.320544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.910906   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.923120   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:15.012269   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.012603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.012605   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.013481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.241391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.342450   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.342933   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.487968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.741013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.819807   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.820519   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.986818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.240849   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.318613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.319887   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.486621   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.741530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.818963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.820103   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.986250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.241331   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.318639   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.319759   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.388335   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:17.486169   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.740440   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.818651   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.820082   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.986722   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.240851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.319266   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.321957   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.486827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.749479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.818898   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.819965   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.986655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.353395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.353455   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.353980   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.388491   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:19.486286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.740811   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.821465   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.987794   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.241615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.343341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.345086   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.485876   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.741706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.822445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.822885   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.986251   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.241243   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.342973   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.343648   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.388636   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.486389   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.741586   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.820057   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.820872   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.986245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.240821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.321008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.321506   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.746761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.845229   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.845516   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.889257   11896 pod_ready.go:93] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.889286   11896 pod_ready.go:82] duration metric: took 40.507126685s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.889299   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.891229   11896 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891254   11896 pod_ready.go:82] duration metric: took 1.946573ms for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	E0923 10:23:22.891266   11896 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891274   11896 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899549   11896 pod_ready.go:93] pod "etcd-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.899575   11896 pod_ready.go:82] duration metric: took 8.292332ms for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899586   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906049   11896 pod_ready.go:93] pod "kube-apiserver-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.906074   11896 pod_ready.go:82] duration metric: took 6.480206ms for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906086   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910833   11896 pod_ready.go:93] pod "kube-controller-manager-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.910859   11896 pod_ready.go:82] duration metric: took 4.764833ms for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910872   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.986668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.089873   11896 pod_ready.go:93] pod "kube-proxy-2f5tn" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.089900   11896 pod_ready.go:82] duration metric: took 179.019892ms for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.089912   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.241038   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.320388   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.322190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.486569   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.487599   11896 pod_ready.go:93] pod "kube-scheduler-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.487631   11896 pod_ready.go:82] duration metric: took 397.7086ms for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.487644   11896 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.740324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.818859   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.819999   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.886465   11896 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.886497   11896 pod_ready.go:82] duration metric: took 398.839138ms for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.886507   11896 pod_ready.go:39] duration metric: took 41.620501569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:23.886523   11896 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:23:23.886570   11896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:23:23.914996   11896 api_server.go:72] duration metric: took 44.780704115s to wait for apiserver process to appear ...
	I0923 10:23:23.915024   11896 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:23:23.915046   11896 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0923 10:23:23.920072   11896 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0923 10:23:23.921132   11896 api_server.go:141] control plane version: v1.31.1
	I0923 10:23:23.921159   11896 api_server.go:131] duration metric: took 6.126816ms to wait for apiserver health ...
	I0923 10:23:23.921169   11896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:23:24.437367   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.437846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.438079   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.438323   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.442864   11896 system_pods.go:59] 17 kube-system pods found
	I0923 10:23:24.442893   11896 system_pods.go:61] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.442904   11896 system_pods.go:61] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.442914   11896 system_pods.go:61] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.442930   11896 system_pods.go:61] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.442939   11896 system_pods.go:61] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.442949   11896 system_pods.go:61] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.442954   11896 system_pods.go:61] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.442963   11896 system_pods.go:61] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.442968   11896 system_pods.go:61] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.442976   11896 system_pods.go:61] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.442985   11896 system_pods.go:61] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.442993   11896 system_pods.go:61] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.443002   11896 system_pods.go:61] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.443009   11896 system_pods.go:61] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.443020   11896 system_pods.go:61] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443030   11896 system_pods.go:61] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443039   11896 system_pods.go:61] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.443049   11896 system_pods.go:74] duration metric: took 521.872993ms to wait for pod list to return data ...
	I0923 10:23:24.443060   11896 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:23:24.445709   11896 default_sa.go:45] found service account: "default"
	I0923 10:23:24.445725   11896 default_sa.go:55] duration metric: took 2.659813ms for default service account to be created ...
	I0923 10:23:24.445731   11896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:23:24.486762   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.493551   11896 system_pods.go:86] 17 kube-system pods found
	I0923 10:23:24.493583   11896 system_pods.go:89] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.493595   11896 system_pods.go:89] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.493604   11896 system_pods.go:89] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.493618   11896 system_pods.go:89] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.493625   11896 system_pods.go:89] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.493633   11896 system_pods.go:89] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.493642   11896 system_pods.go:89] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.493650   11896 system_pods.go:89] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.493658   11896 system_pods.go:89] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.493666   11896 system_pods.go:89] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.493677   11896 system_pods.go:89] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.493685   11896 system_pods.go:89] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.493693   11896 system_pods.go:89] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.493704   11896 system_pods.go:89] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.493716   11896 system_pods.go:89] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493727   11896 system_pods.go:89] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493735   11896 system_pods.go:89] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.493746   11896 system_pods.go:126] duration metric: took 48.009337ms to wait for k8s-apps to be running ...
	I0923 10:23:24.493758   11896 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:23:24.493809   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:23:24.513529   11896 system_svc.go:56] duration metric: took 19.75998ms WaitForService to wait for kubelet
	I0923 10:23:24.513564   11896 kubeadm.go:582] duration metric: took 45.379276732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:23:24.513588   11896 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:23:24.686932   11896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:23:24.686965   11896 node_conditions.go:123] node cpu capacity is 2
	I0923 10:23:24.686977   11896 node_conditions.go:105] duration metric: took 173.384337ms to run NodePressure ...
	I0923 10:23:24.686989   11896 start.go:241] waiting for startup goroutines ...
	I0923 10:23:24.740644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.819562   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.820700   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.987200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.241300   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.343424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.343684   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.488088   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.740686   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.823744   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.824711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.986603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.245648   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.319158   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.320408   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.486134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.741656   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.818867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.820585   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.986548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.240557   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.319023   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.320864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.486855   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.740443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.820340   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.985688   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.240798   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.319348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.320307   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.485922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.740883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.819269   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.821099   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.986140   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.241577   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.319821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.320555   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.485837   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.739828   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.819216   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.820683   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.986090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.240500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.318390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.320276   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.485561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.740036   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.819427   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.820954   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.986481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.242825   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.319201   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.321609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.486421   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.740721   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.820745   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.821165   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.987716   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.240042   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.320623   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.320636   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.487536   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.740655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.819092   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.820745   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.240919   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.319548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.321128   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.486183   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.740178   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.818613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.830934   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.234483   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.240705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.318188   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.321549   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.486252   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.741090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.818534   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.820864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.986959   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.241200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.318668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.320010   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.487738   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.740755   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.846303   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.847461   11896 kapi.go:107] duration metric: took 48.532653767s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:23:35.986432   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.240073   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.320490   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.486975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.740607   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.821390   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.985931   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.240868   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.320823   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.486628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.740321   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.819943   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.320406   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.485374   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.821158   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.985749   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.241435   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.320711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.487179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.740799   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.820591   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.987098   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.239842   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.321547   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.485975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.740732   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.821115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.985768   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.240307   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.320076   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.486615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.739979   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.820446   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.985972   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.240670   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.320827   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.486416   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.821019   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.986853   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.240848   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.320450   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.487018   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.740754   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.841792   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.986488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.240295   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.320589   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.485911   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.741445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.820755   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.987203   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.243595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.320568   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.490033   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.740061   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.821180   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.988792   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.240043   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.320715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.487369   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.740245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.819995   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.986874   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.243429   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.345068   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.489391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.740015   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.820624   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.992212   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.241134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.323440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.486090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.740606   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.820802   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.991332   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.240530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.417715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.487512   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.742506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.820524   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.239803   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.320349   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.486994   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.741224   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.821593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.986425   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.240567   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.320321   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.486405   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.740877   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.986484   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.240827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.320722   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.487461   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.740499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.841584   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.241311   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.324855   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.487424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.740118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.824677   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.985851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.240751   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.320803   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.487062   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.740218   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.831563   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.987830   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.240818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.332865   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.501106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.740363   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.822929   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.990443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.241141   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.806895   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.807674   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.808159   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.820644   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.986084   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.241298   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.327433   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.487016   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.740517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.820018   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.986945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.321016   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.487366   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.740865   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.820699   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.985850   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.479008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.479029   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.489051   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.741335   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.842531   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.986871   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.240003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.320593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.487659   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.739808   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.824778   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.986705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.241008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.320728   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.486320   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.742003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.820606   11896 kapi.go:107] duration metric: took 1m14.504617876s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:01.986382   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.240173   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.510479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.759085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.989516   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.240478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.486506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.739595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.987737   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.240394   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.485945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.740361   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.987426   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.241017   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.486902   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.740789   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.986398   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.240422   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.488497   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.740174   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.986390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.239997   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.486563   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.740856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.985705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.239980   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.487157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.740726   11896 kapi.go:107] duration metric: took 1m18.504006563s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:08.742218   11896 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-230451 cluster.
	I0923 10:24:08.743548   11896 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:08.744742   11896 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:08.986003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.487085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.986761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.486537   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.996063   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.487998   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.986105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.489482   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.986286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.531021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.985832   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.486937   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.988956   11896 kapi.go:107] duration metric: took 1m27.0074062s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:14.990655   11896 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 10:24:14.991930   11896 addons.go:510] duration metric: took 1m35.857607898s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass metrics-server inspektor-gadget storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 10:24:14.991968   11896 start.go:246] waiting for cluster config update ...
	I0923 10:24:14.991993   11896 start.go:255] writing updated cluster config ...
	I0923 10:24:14.992266   11896 ssh_runner.go:195] Run: rm -f paused
	I0923 10:24:15.042846   11896 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:24:15.044785   11896 out.go:177] * Done! kubectl is now configured to use "addons-230451" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.892953239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087610892925107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd6355c8-27f8-430f-97da-ae2c365a35b4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.893559349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edc9c99f-486e-4885-bc84-f2e46d4f492a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.893633586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edc9c99f-486e-4885-bc84-f2e46d4f492a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.894417620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c15e3aa1748691a7e3248155d60803e2568a00d9a09e6cb7fcbbbbfde157d2a9,PodSandboxId:190812280e54af65cd1abf021128faeb5f67f356b8b7d72e9a93e380c8c4b39a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727087609874949459,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 473a5fcc-1118-4412-8a07-a361ede815d2,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28375705bb1d20ab311f2cf237fed01eabef8c0f23efa9debcd1b49a25528090,PodSandboxId:3a5fa05bdfad366130d002aa1a75d14505e7425af5db0c3ef7d33ec4353f62d2,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727087558210769217,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8f9de3ef-c28d-43df-a70a-b02891
b7f2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffed6da75362ea17f6f87c4358308d3bbbdddf3d5ef1aaeee4809eb4a35dad08,PodSandboxId:cfd11e7496705b0c5f0c75de62cdb105a33881bcfa68af2b32550a7580727a06,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727087555200954552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1be9563a-0099-4395-b271-6c07300521e9,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd057f62deedb1e7880b892570758985c5f20164f6b29138f852f13942b03f2,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727087054183413419,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e368
34-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d4cc430fcb53d6e63ca236a0defff5747a4df062bcd8344dbf66023c08ff66,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727087052517964338,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c854ef0dc8e54f99da4bf6f575ea8853b4631bef9c88b55fe4d6c4b9dc11edd,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727087050782100439,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b117631f6c75a6d42e1f2cffbd0f11c90a5f82c6196bbdbec127db471c6b3e9,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1727087049852929979,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hos
tpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:
1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3
a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881d0a81ef0aa56c527f2af4ada2216d638b32bc7e832ca4c5847bab9e4a3844,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8
8ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727087042066613715,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442,PodSandboxId:f064a117f64ac916723e7cec4eb167829247e2c792fa6d8fc6a74f6ae453640c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727087040901025865,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gggrn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70182994-4ec2-4cc8-a4b3-754d8223e9c5,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{
Id:30c2c3a6905e2e7c0a48bc2a96a2f5cf8cd183c93a52688dc8b4f6addcb18e21,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727087032543544030,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d0341f5bed88db039284eea460775c121e1cde7c15565f487dacf06f3a7881,PodSandboxId:34d229e17206081fe48fa8de61f9b7993981534f644b4870690352f943f199a4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727087031069368201,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 651d7af5-c66c-4a47-a274-97f99744e66e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cffc82ac1f1f051eccf5e793114d94d1a3df3f10656b68827ce046ac04959e9e,PodSandboxId:112ca89b573f33d9cdf3278873518d2d7dcaecea1d5fb4ada4f011197c293c78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727087029520853741,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215bba0a-54bf-45ec-a6cd-92f89ad62dac,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9521ef515f1d7bad2680349f45e6bd12de5763a4873c9cd0455477773abb383d,PodSandboxId:c026c0901169ed2909421fe889856fac4485e4ea669b6f6e122bc390d9418ab9,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027372594523,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-zc5h7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f9592b-9ae4-4ef5-aaeb-a421f92692bb,},Annotations:map[string]string{io.kubernetes.container.hash: b7d218
15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17457aa0d5ab22edbf55f6e95a0f8ddb6953799bfcb87cb1a5487b0f1956f332,PodSandboxId:23b66eb9eb57fab1e0edd1d47c356cacad733243ae5edec0067ac4c3a8a938fa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027111912004,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-mtclj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 4d040c25-f747-448f-81e3-46dd810a9b80,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6,PodSandboxId:16e53975814fed6f48d35741a97d4b25b8ce55148b60e76dd6cddc67c4b1101f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1727087014797637653,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kwn7c,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: fab26ceb-8538-4146-9f14-955f715b3dd7,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172708700021049690
9,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f9e3c2168ad45dda5e990a9a2b57a6fd3f16958bc1ea0093b13ac69c4b429,PodSandboxId:443b481dfd0039da6ab68c82ca84f77f1a52b413cfc456c39ca8f2d551b877ee,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7
5ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727086998439877374,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7z2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71f47a69-a374-4586-8d8b-0ec84aeee203,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a957022c41f5d0caed4b185ff8405d71bcd082dea64d8756fe7c9bef7bbcefe,PodSandboxId:783b44dbf17c92ef5d24724743b0e180e564826c357a43cf589fef8590c15894,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed52
96669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727086995259865579,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-r6tsf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53ab60ce-cc9d-4cfc-8ea7-0377211c4549,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf,PodSandboxId:8e1bfc24148a048b481995
d10e8cbe9ed74a018276888221526be8a71e5c7d20,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727086975843080298,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c962d61b-b651-40b4-b128-49b4f1966a46,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edc9c99f-486e-4885-bc84-f2e46d4f492a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.939965367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f6f7b01-d8d1-4338-9634-9d8b6ff6f32e name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.940037492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f6f7b01-d8d1-4338-9634-9d8b6ff6f32e name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.941125561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bae6eb1-a7d5-4d28-b604-7db9e1c70784 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.942228327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087610942199030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bae6eb1-a7d5-4d28-b604-7db9e1c70784 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.942802016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf1a5d94-b788-4d67-ae29-a9370d17ee3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.942865240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf1a5d94-b788-4d67-ae29-a9370d17ee3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.943508842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c15e3aa1748691a7e3248155d60803e2568a00d9a09e6cb7fcbbbbfde157d2a9,PodSandboxId:190812280e54af65cd1abf021128faeb5f67f356b8b7d72e9a93e380c8c4b39a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727087609874949459,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 473a5fcc-1118-4412-8a07-a361ede815d2,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28375705bb1d20ab311f2cf237fed01eabef8c0f23efa9debcd1b49a25528090,PodSandboxId:3a5fa05bdfad366130d002aa1a75d14505e7425af5db0c3ef7d33ec4353f62d2,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727087558210769217,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8f9de3ef-c28d-43df-a70a-b02891
b7f2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffed6da75362ea17f6f87c4358308d3bbbdddf3d5ef1aaeee4809eb4a35dad08,PodSandboxId:cfd11e7496705b0c5f0c75de62cdb105a33881bcfa68af2b32550a7580727a06,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727087555200954552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1be9563a-0099-4395-b271-6c07300521e9,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd057f62deedb1e7880b892570758985c5f20164f6b29138f852f13942b03f2,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727087054183413419,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e368
34-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d4cc430fcb53d6e63ca236a0defff5747a4df062bcd8344dbf66023c08ff66,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727087052517964338,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c854ef0dc8e54f99da4bf6f575ea8853b4631bef9c88b55fe4d6c4b9dc11edd,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727087050782100439,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b117631f6c75a6d42e1f2cffbd0f11c90a5f82c6196bbdbec127db471c6b3e9,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1727087049852929979,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hos
tpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:
1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3
a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881d0a81ef0aa56c527f2af4ada2216d638b32bc7e832ca4c5847bab9e4a3844,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8
8ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727087042066613715,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442,PodSandboxId:f064a117f64ac916723e7cec4eb167829247e2c792fa6d8fc6a74f6ae453640c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727087040901025865,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gggrn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70182994-4ec2-4cc8-a4b3-754d8223e9c5,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{
Id:30c2c3a6905e2e7c0a48bc2a96a2f5cf8cd183c93a52688dc8b4f6addcb18e21,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727087032543544030,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d0341f5bed88db039284eea460775c121e1cde7c15565f487dacf06f3a7881,PodSandboxId:34d229e17206081fe48fa8de61f9b7993981534f644b4870690352f943f199a4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727087031069368201,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 651d7af5-c66c-4a47-a274-97f99744e66e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cffc82ac1f1f051eccf5e793114d94d1a3df3f10656b68827ce046ac04959e9e,PodSandboxId:112ca89b573f33d9cdf3278873518d2d7dcaecea1d5fb4ada4f011197c293c78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727087029520853741,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215bba0a-54bf-45ec-a6cd-92f89ad62dac,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9521ef515f1d7bad2680349f45e6bd12de5763a4873c9cd0455477773abb383d,PodSandboxId:c026c0901169ed2909421fe889856fac4485e4ea669b6f6e122bc390d9418ab9,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027372594523,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-zc5h7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f9592b-9ae4-4ef5-aaeb-a421f92692bb,},Annotations:map[string]string{io.kubernetes.container.hash: b7d218
15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17457aa0d5ab22edbf55f6e95a0f8ddb6953799bfcb87cb1a5487b0f1956f332,PodSandboxId:23b66eb9eb57fab1e0edd1d47c356cacad733243ae5edec0067ac4c3a8a938fa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027111912004,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-mtclj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 4d040c25-f747-448f-81e3-46dd810a9b80,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6,PodSandboxId:16e53975814fed6f48d35741a97d4b25b8ce55148b60e76dd6cddc67c4b1101f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1727087014797637653,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kwn7c,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: fab26ceb-8538-4146-9f14-955f715b3dd7,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172708700021049690
9,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f9e3c2168ad45dda5e990a9a2b57a6fd3f16958bc1ea0093b13ac69c4b429,PodSandboxId:443b481dfd0039da6ab68c82ca84f77f1a52b413cfc456c39ca8f2d551b877ee,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7
5ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727086998439877374,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7z2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71f47a69-a374-4586-8d8b-0ec84aeee203,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a957022c41f5d0caed4b185ff8405d71bcd082dea64d8756fe7c9bef7bbcefe,PodSandboxId:783b44dbf17c92ef5d24724743b0e180e564826c357a43cf589fef8590c15894,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed52
96669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727086995259865579,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-r6tsf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53ab60ce-cc9d-4cfc-8ea7-0377211c4549,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf,PodSandboxId:8e1bfc24148a048b481995
d10e8cbe9ed74a018276888221526be8a71e5c7d20,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727086975843080298,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c962d61b-b651-40b4-b128-49b4f1966a46,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf1a5d94-b788-4d67-ae29-a9370d17ee3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.988534810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54fbbce9-19f4-438d-8a9a-54ee7092c8c0 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.988604007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54fbbce9-19f4-438d-8a9a-54ee7092c8c0 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.989817136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13cd9427-ddfe-46bd-8c80-a364431acc7e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.990933036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087610990904063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13cd9427-ddfe-46bd-8c80-a364431acc7e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.991569917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ccadd2b-497b-4884-81ba-1aa0a1aa78db name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.991688889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ccadd2b-497b-4884-81ba-1aa0a1aa78db name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:30 addons-230451 crio[662]: time="2024-09-23 10:33:30.992283085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c15e3aa1748691a7e3248155d60803e2568a00d9a09e6cb7fcbbbbfde157d2a9,PodSandboxId:190812280e54af65cd1abf021128faeb5f67f356b8b7d72e9a93e380c8c4b39a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727087609874949459,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 473a5fcc-1118-4412-8a07-a361ede815d2,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28375705bb1d20ab311f2cf237fed01eabef8c0f23efa9debcd1b49a25528090,PodSandboxId:3a5fa05bdfad366130d002aa1a75d14505e7425af5db0c3ef7d33ec4353f62d2,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727087558210769217,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8f9de3ef-c28d-43df-a70a-b02891
b7f2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffed6da75362ea17f6f87c4358308d3bbbdddf3d5ef1aaeee4809eb4a35dad08,PodSandboxId:cfd11e7496705b0c5f0c75de62cdb105a33881bcfa68af2b32550a7580727a06,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727087555200954552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1be9563a-0099-4395-b271-6c07300521e9,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd057f62deedb1e7880b892570758985c5f20164f6b29138f852f13942b03f2,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727087054183413419,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e368
34-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d4cc430fcb53d6e63ca236a0defff5747a4df062bcd8344dbf66023c08ff66,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727087052517964338,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c854ef0dc8e54f99da4bf6f575ea8853b4631bef9c88b55fe4d6c4b9dc11edd,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727087050782100439,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b117631f6c75a6d42e1f2cffbd0f11c90a5f82c6196bbdbec127db471c6b3e9,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1727087049852929979,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hos
tpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:
1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3
a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881d0a81ef0aa56c527f2af4ada2216d638b32bc7e832ca4c5847bab9e4a3844,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8
8ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727087042066613715,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442,PodSandboxId:f064a117f64ac916723e7cec4eb167829247e2c792fa6d8fc6a74f6ae453640c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727087040901025865,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gggrn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70182994-4ec2-4cc8-a4b3-754d8223e9c5,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{
Id:30c2c3a6905e2e7c0a48bc2a96a2f5cf8cd183c93a52688dc8b4f6addcb18e21,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727087032543544030,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d0341f5bed88db039284eea460775c121e1cde7c15565f487dacf06f3a7881,PodSandboxId:34d229e17206081fe48fa8de61f9b7993981534f644b4870690352f943f199a4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727087031069368201,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 651d7af5-c66c-4a47-a274-97f99744e66e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cffc82ac1f1f051eccf5e793114d94d1a3df3f10656b68827ce046ac04959e9e,PodSandboxId:112ca89b573f33d9cdf3278873518d2d7dcaecea1d5fb4ada4f011197c293c78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727087029520853741,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215bba0a-54bf-45ec-a6cd-92f89ad62dac,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9521ef515f1d7bad2680349f45e6bd12de5763a4873c9cd0455477773abb383d,PodSandboxId:c026c0901169ed2909421fe889856fac4485e4ea669b6f6e122bc390d9418ab9,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027372594523,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-zc5h7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f9592b-9ae4-4ef5-aaeb-a421f92692bb,},Annotations:map[string]string{io.kubernetes.container.hash: b7d218
15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17457aa0d5ab22edbf55f6e95a0f8ddb6953799bfcb87cb1a5487b0f1956f332,PodSandboxId:23b66eb9eb57fab1e0edd1d47c356cacad733243ae5edec0067ac4c3a8a938fa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027111912004,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-mtclj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 4d040c25-f747-448f-81e3-46dd810a9b80,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6,PodSandboxId:16e53975814fed6f48d35741a97d4b25b8ce55148b60e76dd6cddc67c4b1101f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1727087014797637653,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kwn7c,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: fab26ceb-8538-4146-9f14-955f715b3dd7,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172708700021049690
9,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f9e3c2168ad45dda5e990a9a2b57a6fd3f16958bc1ea0093b13ac69c4b429,PodSandboxId:443b481dfd0039da6ab68c82ca84f77f1a52b413cfc456c39ca8f2d551b877ee,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7
5ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727086998439877374,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7z2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71f47a69-a374-4586-8d8b-0ec84aeee203,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a957022c41f5d0caed4b185ff8405d71bcd082dea64d8756fe7c9bef7bbcefe,PodSandboxId:783b44dbf17c92ef5d24724743b0e180e564826c357a43cf589fef8590c15894,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed52
96669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727086995259865579,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-r6tsf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53ab60ce-cc9d-4cfc-8ea7-0377211c4549,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf,PodSandboxId:8e1bfc24148a048b481995
d10e8cbe9ed74a018276888221526be8a71e5c7d20,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727086975843080298,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c962d61b-b651-40b4-b128-49b4f1966a46,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ccadd2b-497b-4884-81ba-1aa0a1aa78db name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.026466456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84778a17-5f2a-431a-838b-ae87c0476f10 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.026538585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84778a17-5f2a-431a-838b-ae87c0476f10 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.028150896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=731a346f-a55d-469b-8833-ffeaf0099a8c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.029236071Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087611029205053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=731a346f-a55d-469b-8833-ffeaf0099a8c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.029902911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff9224db-a5eb-4506-8420-99500e039bab name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.030084705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff9224db-a5eb-4506-8420-99500e039bab name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:33:31 addons-230451 crio[662]: time="2024-09-23 10:33:31.030904057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c15e3aa1748691a7e3248155d60803e2568a00d9a09e6cb7fcbbbbfde157d2a9,PodSandboxId:190812280e54af65cd1abf021128faeb5f67f356b8b7d72e9a93e380c8c4b39a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727087609874949459,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 473a5fcc-1118-4412-8a07-a361ede815d2,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name
\":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28375705bb1d20ab311f2cf237fed01eabef8c0f23efa9debcd1b49a25528090,PodSandboxId:3a5fa05bdfad366130d002aa1a75d14505e7425af5db0c3ef7d33ec4353f62d2,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727087558210769217,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8f9de3ef-c28d-43df-a70a-b02891
b7f2a4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffed6da75362ea17f6f87c4358308d3bbbdddf3d5ef1aaeee4809eb4a35dad08,PodSandboxId:cfd11e7496705b0c5f0c75de62cdb105a33881bcfa68af2b32550a7580727a06,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727087555200954552,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1be9563a-0099-4395-b271-6c07300521e9,},Ann
otations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd057f62deedb1e7880b892570758985c5f20164f6b29138f852f13942b03f2,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727087054183413419,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e368
34-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16d4cc430fcb53d6e63ca236a0defff5747a4df062bcd8344dbf66023c08ff66,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727087052517964338,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c854ef0dc8e54f99da4bf6f575ea8853b4631bef9c88b55fe4d6c4b9dc11edd,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727087050782100439,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b117631f6c75a6d42e1f2cffbd0f11c90a5f82c6196bbdbec127db471c6b3e9,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1727087049852929979,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hos
tpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:
1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3
a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881d0a81ef0aa56c527f2af4ada2216d638b32bc7e832ca4c5847bab9e4a3844,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8
8ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727087042066613715,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442,PodSandboxId:f064a117f64ac916723e7cec4eb167829247e2c792fa6d8fc6a74f6ae453640c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727087040901025865,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-gggrn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70182994-4ec2-4cc8-a4b3-754d8223e9c5,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{
Id:30c2c3a6905e2e7c0a48bc2a96a2f5cf8cd183c93a52688dc8b4f6addcb18e21,PodSandboxId:5b069c3daf15ed59c24abbc972fe74f617f9fa9e227823efaf3fd1d383ada143,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727087032543544030,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8mdng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e36834-e18e-4390-bb18-a360cde6394c,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02d0341f5bed88db039284eea460775c121e1cde7c15565f487dacf06f3a7881,PodSandboxId:34d229e17206081fe48fa8de61f9b7993981534f644b4870690352f943f199a4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727087031069368201,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 651d7af5-c66c-4a47-a274-97f99744e66e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cffc82ac1f1f051eccf5e793114d94d1a3df3f10656b68827ce046ac04959e9e,PodSandboxId:112ca89b573f33d9cdf3278873518d2d7dcaecea1d5fb4ada4f011197c293c78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727087029520853741,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215bba0a-54bf-45ec-a6cd-92f89ad62dac,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9521ef515f1d7bad2680349f45e6bd12de5763a4873c9cd0455477773abb383d,PodSandboxId:c026c0901169ed2909421fe889856fac4485e4ea669b6f6e122bc390d9418ab9,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027372594523,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-zc5h7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f9592b-9ae4-4ef5-aaeb-a421f92692bb,},Annotations:map[string]string{io.kubernetes.container.hash: b7d218
15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]strin
g{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17457aa0d5ab22edbf55f6e95a0f8ddb6953799bfcb87cb1a5487b0f1956f332,PodSandboxId:23b66eb9eb57fab1e0edd1d47c356cacad733243ae5edec0067ac4c3a8a938fa,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727087027111912004,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-mtclj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 4d040c25-f747-448f-81e3-46dd810a9b80,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6,PodSandboxId:16e53975814fed6f48d35741a97d4b25b8ce55148b60e76dd6cddc67c4b1101f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1727087014797637653,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kwn7c,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: fab26ceb-8538-4146-9f14-955f715b3dd7,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172708700021049690
9,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f9e3c2168ad45dda5e990a9a2b57a6fd3f16958bc1ea0093b13ac69c4b429,PodSandboxId:443b481dfd0039da6ab68c82ca84f77f1a52b413cfc456c39ca8f2d551b877ee,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7
5ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727086998439877374,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-7z2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71f47a69-a374-4586-8d8b-0ec84aeee203,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a957022c41f5d0caed4b185ff8405d71bcd082dea64d8756fe7c9bef7bbcefe,PodSandboxId:783b44dbf17c92ef5d24724743b0e180e564826c357a43cf589fef8590c15894,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed52
96669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1727086995259865579,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-r6tsf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 53ab60ce-cc9d-4cfc-8ea7-0377211c4549,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf,PodSandboxId:8e1bfc24148a048b481995
d10e8cbe9ed74a018276888221526be8a71e5c7d20,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727086975843080298,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c962d61b-b651-40b4-b128-49b4f1966a46,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff9224db-a5eb-4506-8420-99500e039bab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c15e3aa174869       docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                              1 second ago        Running             task-pv-container                        0                   190812280e54a       task-pv-pod-restore
	28375705bb1d2       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             52 seconds ago      Exited              helper-pod                               0                   3a5fa05bdfad3       helper-pod-delete-pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455
	ffed6da75362e       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                                            55 seconds ago      Exited              busybox                                  0                   cfd11e7496705       test-local-path
	efd057f62deed       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	16d4cc430fcb5       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	7c854ef0dc8e5       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            9 minutes ago       Running             liveness-probe                           0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	2b117631f6c75       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	63f8091f52d77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   7accadc369381       gcp-auth-89d5ffd79-r2dxj
	e06f961e39af1       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             9 minutes ago       Exited              patch                                    2                   82463f63435a7       ingress-nginx-admission-patch-278z9
	881d0a81ef0aa       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	c1e529969cb93       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             9 minutes ago       Running             controller                               0                   f064a117f64ac       ingress-nginx-controller-bc57996ff-gggrn
	30c2c3a6905e2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   5b069c3daf15e       csi-hostpathplugin-8mdng
	02d0341f5bed8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              9 minutes ago       Running             csi-resizer                              0                   34d229e172060       csi-hostpath-resizer-0
	cffc82ac1f1f0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             9 minutes ago       Running             csi-attacher                             0                   112ca89b573f3       csi-hostpath-attacher-0
	9521ef515f1d7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   c026c0901169e       snapshot-controller-56fcc65765-zc5h7
	1b37183ea0c55       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   9 minutes ago       Exited              create                                   0                   25adc288fa904       ingress-nginx-admission-create-b7shb
	17457aa0d5ab2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   23b66eb9eb57f       snapshot-controller-56fcc65765-mtclj
	992df9568fa60       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        10 minutes ago      Running             metrics-server                           0                   9b9a78bf3e3fb       metrics-server-84c5f94fbc-vx2z2
	4a957022c41f5       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf                               10 minutes ago      Running             cloud-spanner-emulator                   0                   783b44dbf17c9       cloud-spanner-emulator-5b584cc74-r6tsf
	da7f78da32325       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   8e1bfc24148a0       kube-ingress-dns-minikube
	48b883a7cf210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   8f190e8711730       storage-provisioner
	6fed682ab380f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             10 minutes ago      Running             coredns                                  0                   248e92b5f5680       coredns-7c65d6cfc9-7mfbw
	6238ede2ce75e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             10 minutes ago      Running             kube-proxy                               0                   11212750411bf       kube-proxy-2f5tn
	9b030424709a2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             11 minutes ago      Running             kube-scheduler                           0                   45cd3db2a1e7a       kube-scheduler-addons-230451
	e428589b0fa5f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             11 minutes ago      Running             kube-controller-manager                  0                   5a2773265dbdc       kube-controller-manager-addons-230451
	455a0db0cbf9d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             11 minutes ago      Running             etcd                                     0                   48d959ccb4da3       etcd-addons-230451
	853b9960a36de       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             11 minutes ago      Running             kube-apiserver                           0                   35551829a0c35       kube-apiserver-addons-230451
	
	
	==> coredns [6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131] <==
	[INFO] 127.0.0.1:53719 - 30820 "HINFO IN 6685210372362929190.536412389867895458. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01361851s
	[INFO] 10.244.0.8:57781 - 24672 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0003346s
	[INFO] 10.244.0.8:57781 - 61805 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149843s
	[INFO] 10.244.0.8:51455 - 24269 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117247s
	[INFO] 10.244.0.8:51455 - 30147 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132017s
	[INFO] 10.244.0.8:49756 - 27783 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008366s
	[INFO] 10.244.0.8:49756 - 27013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096337s
	[INFO] 10.244.0.8:57401 - 50559 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099583s
	[INFO] 10.244.0.8:57401 - 121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163833s
	[INFO] 10.244.0.8:41582 - 43809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171459s
	[INFO] 10.244.0.8:41582 - 3879 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206793s
	[INFO] 10.244.0.8:34747 - 26460 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006276s
	[INFO] 10.244.0.8:34747 - 25950 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029536s
	[INFO] 10.244.0.8:42596 - 15504 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050529s
	[INFO] 10.244.0.8:42596 - 29358 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049956s
	[INFO] 10.244.0.8:46828 - 21289 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081739s
	[INFO] 10.244.0.8:46828 - 11311 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096602s
	[INFO] 10.244.0.21:47112 - 35978 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00044167s
	[INFO] 10.244.0.21:39898 - 22255 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008491s
	[INFO] 10.244.0.21:43466 - 53222 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131557s
	[INFO] 10.244.0.21:52335 - 61823 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159688s
	[INFO] 10.244.0.21:42381 - 33204 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118433s
	[INFO] 10.244.0.21:51980 - 28250 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104154s
	[INFO] 10.244.0.21:37226 - 50868 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00097457s
	[INFO] 10.244.0.21:35684 - 29625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000645401s
	
	
	==> describe nodes <==
	Name:               addons-230451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-230451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-230451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-230451
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-230451"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-230451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:33:06 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:33:06 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:33:06 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:33:06 +0000   Mon, 23 Sep 2024 10:22:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-230451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 610d00e132ff4d0bb3d2f3caf1b3d48a
	  System UUID:                610d00e1-32ff-4d0b-b3d2-f3caf1b3d48a
	  Boot ID:                    ccc8674b-e396-46a3-bf38-22f6c0d79432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-r6tsf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  gcp-auth                    gcp-auth-89d5ffd79-r2dxj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-gggrn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-7mfbw                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-8mdng                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-230451                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-230451                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-230451       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-2f5tn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-230451                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-vx2z2             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-mtclj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-zc5h7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-230451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-230451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-230451 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-230451 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-230451 event: Registered Node addons-230451 in Controller
	
	
	==> dmesg <==
	[  +5.730136] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.158143] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.018430] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.266586] kauditd_printk_skb: 168 callbacks suppressed
	[  +6.035127] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:23] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.997386] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.219809] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.523154] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.175400] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.134104] kauditd_printk_skb: 71 callbacks suppressed
	[Sep23 10:24] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.640337] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.746008] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.771381] kauditd_printk_skb: 45 callbacks suppressed
	[Sep23 10:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.410642] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.215645] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.744354] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.359012] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.799993] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb] <==
	{"level":"info","ts":"2024-09-23T10:23:56.789725Z","caller":"traceutil/trace.go:171","msg":"trace[2052105943] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1046; }","duration":"386.955803ms","start":"2024-09-23T10:23:56.402762Z","end":"2024-09-23T10:23:56.789718Z","steps":["trace[2052105943] 'range keys from in-memory index tree'  (duration: 386.853751ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.402719Z","time spent":"387.021008ms","remote":"127.0.0.1:56784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-23T10:23:56.789891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.104712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:56.789926Z","caller":"traceutil/trace.go:171","msg":"trace[1887252976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1046; }","duration":"316.139111ms","start":"2024-09-23T10:23:56.473782Z","end":"2024-09-23T10:23:56.789921Z","steps":["trace[1887252976] 'range keys from in-memory index tree'  (duration: 316.059373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789943Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.473634Z","time spent":"316.304062ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-23T10:23:56.790488Z","caller":"traceutil/trace.go:171","msg":"trace[1993101087] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"300.658273ms","start":"2024-09-23T10:23:56.489821Z","end":"2024-09-23T10:23:56.790480Z","steps":["trace[1993101087] 'process raft request'  (duration: 297.906276ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.790623Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.489805Z","time spent":"300.723172ms","remote":"127.0.0.1:57094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:790 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2024-09-23T10:23:59.461550Z","caller":"traceutil/trace.go:171","msg":"trace[1713246877] linearizableReadLoop","detail":"{readStateIndex:1094; appliedIndex:1093; }","duration":"232.90659ms","start":"2024-09-23T10:23:59.228626Z","end":"2024-09-23T10:23:59.461533Z","steps":["trace[1713246877] 'read index received'  (duration: 231.853253ms)","trace[1713246877] 'applied index is now lower than readState.Index'  (duration: 1.052836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:23:59.461773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.14172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.461821Z","caller":"traceutil/trace.go:171","msg":"trace[1810414376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"233.215712ms","start":"2024-09-23T10:23:59.228599Z","end":"2024-09-23T10:23:59.461815Z","steps":["trace[1810414376] 'agreement among raft nodes before linearized reading'  (duration: 233.094125ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:23:59.461702Z","caller":"traceutil/trace.go:171","msg":"trace[1566092567] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"351.447386ms","start":"2024-09-23T10:23:59.110237Z","end":"2024-09-23T10:23:59.461684Z","steps":["trace[1566092567] 'process raft request'  (duration: 350.997358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.462122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.656543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.462168Z","caller":"traceutil/trace.go:171","msg":"trace[1196861560] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"154.708489ms","start":"2024-09-23T10:23:59.307453Z","end":"2024-09-23T10:23:59.462162Z","steps":["trace[1196861560] 'agreement among raft nodes before linearized reading'  (duration: 154.640705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.463122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:59.110202Z","time spent":"351.753223ms","remote":"127.0.0.1:56906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" mod_revision:1051 > success:<request_put:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" value_size:628 lease:839800514810162161 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" > >"}
	{"level":"info","ts":"2024-09-23T10:24:21.903648Z","caller":"traceutil/trace.go:171","msg":"trace[1089261884] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"329.698815ms","start":"2024-09-23T10:24:21.573933Z","end":"2024-09-23T10:24:21.903631Z","steps":["trace[1089261884] 'process raft request'  (duration: 329.594188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:24:21.903769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:24:21.573911Z","time spent":"329.789617ms","remote":"127.0.0.1:56998","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1190 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-23T10:32:22.866527Z","caller":"traceutil/trace.go:171","msg":"trace[1341451039] transaction","detail":"{read_only:false; response_revision:1943; number_of_response:1; }","duration":"135.103828ms","start":"2024-09-23T10:32:22.731398Z","end":"2024-09-23T10:32:22.866501Z","steps":["trace[1341451039] 'process raft request'  (duration: 134.961155ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:29.856569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-09-23T10:32:29.884999Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1510,"took":"27.805671ms","hash":3200741289,"current-db-size-bytes":6541312,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3637248,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:32:29.885056Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3200741289,"revision":1510,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:32:55.602366Z","caller":"traceutil/trace.go:171","msg":"trace[225191809] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"126.227212ms","start":"2024-09-23T10:32:55.476064Z","end":"2024-09-23T10:32:55.602291Z","steps":["trace[225191809] 'read index received'  (duration: 126.065779ms)","trace[225191809] 'applied index is now lower than readState.Index'  (duration: 161.03µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:32:55.602562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.447391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:32:55.602588Z","caller":"traceutil/trace.go:171","msg":"trace[894733726] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2160; }","duration":"126.522421ms","start":"2024-09-23T10:32:55.476060Z","end":"2024-09-23T10:32:55.602582Z","steps":["trace[894733726] 'agreement among raft nodes before linearized reading'  (duration: 126.428208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:55.602743Z","caller":"traceutil/trace.go:171","msg":"trace[43643442] transaction","detail":"{read_only:false; response_revision:2160; number_of_response:1; }","duration":"129.84545ms","start":"2024-09-23T10:32:55.472891Z","end":"2024-09-23T10:32:55.602737Z","steps":["trace[43643442] 'process raft request'  (duration: 129.312421ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:33:00.762090Z","caller":"traceutil/trace.go:171","msg":"trace[648338775] transaction","detail":"{read_only:false; response_revision:2169; number_of_response:1; }","duration":"288.031158ms","start":"2024-09-23T10:33:00.473384Z","end":"2024-09-23T10:33:00.761415Z","steps":["trace[648338775] 'process raft request'  (duration: 287.71469ms)"],"step_count":1}
	
	
	==> gcp-auth [63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b] <==
	2024/09/23 10:24:08 GCP Auth Webhook started!
	2024/09/23 10:24:15 Ready to marshal response ...
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:24:15 Ready to marshal response ...
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:24:15 Ready to marshal response ...
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:29 Ready to marshal response ...
	2024/09/23 10:32:29 Ready to write response ...
	2024/09/23 10:32:37 Ready to marshal response ...
	2024/09/23 10:32:37 Ready to write response ...
	2024/09/23 10:32:53 Ready to marshal response ...
	2024/09/23 10:32:53 Ready to write response ...
	2024/09/23 10:33:28 Ready to marshal response ...
	2024/09/23 10:33:28 Ready to write response ...
	
	
	==> kernel <==
	 10:33:31 up 11 min,  0 users,  load average: 0.49, 0.56, 0.48
	Linux addons-230451 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e] <==
	I0923 10:22:47.841803       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.149.131"}
	I0923 10:22:49.689944       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.59.26"}
	W0923 10:23:45.416820       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 10:23:45.416897       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0923 10:23:45.416959       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 10:23:45.417066       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 10:23:45.418116       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 10:23:45.418229       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0923 10:24:23.983995       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 10:24:23.984073       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0923 10:24:23.985620       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	E0923 10:24:23.987293       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	E0923 10:24:23.993204       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	I0923 10:24:24.062155       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:32:18.858064       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.199.8"}
	E0923 10:32:53.563750       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:33:08.344205       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:33:27.592624       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:28.618746       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780] <==
	I0923 10:24:26.015848       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 10:24:26.047779       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 10:24:28.007291       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 10:24:28.038242       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 10:24:37.075787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-230451"
	I0923 10:29:42.316407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-230451"
	I0923 10:32:18.947986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="56.108905ms"
	I0923 10:32:18.971019       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="22.95208ms"
	I0923 10:32:18.986882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="15.809631ms"
	I0923 10:32:18.987189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="46.574µs"
	I0923 10:32:24.785175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.253µs"
	I0923 10:32:25.276966       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="67.633µs"
	I0923 10:32:25.336812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="22.668082ms"
	I0923 10:32:25.337091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="85.021µs"
	I0923 10:32:32.567105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="13.773µs"
	I0923 10:32:34.974668       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0923 10:32:35.981881       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-230451"
	I0923 10:32:38.352972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="13.18µs"
	I0923 10:32:42.765375       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0923 10:33:06.598941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-230451"
	I0923 10:33:26.018817       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	E0923 10:33:28.620537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:33:29.797068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.927µs"
	W0923 10:33:30.001058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:33:30.001113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:22:43.920909       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:22:44.021992       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	E0923 10:22:44.022096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:45.319016       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:22:45.319081       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:22:45.319124       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:45.327775       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:45.328048       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:45.328078       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:45.345796       1 config.go:199] "Starting service config controller"
	I0923 10:22:45.345835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:45.345866       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:45.345870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:45.350777       1 config.go:328] "Starting node config controller"
	I0923 10:22:45.350807       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:45.446542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:22:45.446598       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:45.450897       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe] <==
	W0923 10:22:31.294807       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:31.294862       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:22:32.090971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:32.091289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.095004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:32.095037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.148723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.148834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.209219       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:32.209362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.290354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.290448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.370809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.370910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.393003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.393122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.446838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:22:32.446961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.464976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:32.465158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.550414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:22:32.550554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.715850       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:32.715995       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 10:22:34.754020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.324288    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd43da3e-c3a6-4889-933f-e3b234584151" containerName="task-pv-container"
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.324294    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41306b0-40aa-4b7e-b9f3-931550e87f01" containerName="gadget"
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.384625    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdssh\" (UniqueName: \"kubernetes.io/projected/473a5fcc-1118-4412-8a07-a361ede815d2-kube-api-access-hdssh\") pod \"task-pv-pod-restore\" (UID: \"473a5fcc-1118-4412-8a07-a361ede815d2\") " pod="default/task-pv-pod-restore"
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.384768    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e778c303-f79c-456b-97cc-439a8dcba505\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^44d0bd30-7997-11ef-ada0-e6c747f65339\") pod \"task-pv-pod-restore\" (UID: \"473a5fcc-1118-4412-8a07-a361ede815d2\") " pod="default/task-pv-pod-restore"
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.385033    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/473a5fcc-1118-4412-8a07-a361ede815d2-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"473a5fcc-1118-4412-8a07-a361ede815d2\") " pod="default/task-pv-pod-restore"
	Sep 23 10:33:28 addons-230451 kubelet[1205]: I0923 10:33:28.493600    1205 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e778c303-f79c-456b-97cc-439a8dcba505\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^44d0bd30-7997-11ef-ada0-e6c747f65339\") pod \"task-pv-pod-restore\" (UID: \"473a5fcc-1118-4412-8a07-a361ede815d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/1772da54bb778a9e2b9cecdf23c827b49ea0da8ff92f6cd2396861b5f85fe434/globalmount\"" pod="default/task-pv-pod-restore"
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.395493    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/beb91492-378e-40c4-8664-867b2c4e7e24-gcp-creds\") pod \"beb91492-378e-40c4-8664-867b2c4e7e24\" (UID: \"beb91492-378e-40c4-8664-867b2c4e7e24\") "
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.395565    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs466\" (UniqueName: \"kubernetes.io/projected/beb91492-378e-40c4-8664-867b2c4e7e24-kube-api-access-vs466\") pod \"beb91492-378e-40c4-8664-867b2c4e7e24\" (UID: \"beb91492-378e-40c4-8664-867b2c4e7e24\") "
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.396226    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb91492-378e-40c4-8664-867b2c4e7e24-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "beb91492-378e-40c4-8664-867b2c4e7e24" (UID: "beb91492-378e-40c4-8664-867b2c4e7e24"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.400600    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb91492-378e-40c4-8664-867b2c4e7e24-kube-api-access-vs466" (OuterVolumeSpecName: "kube-api-access-vs466") pod "beb91492-378e-40c4-8664-867b2c4e7e24" (UID: "beb91492-378e-40c4-8664-867b2c4e7e24"). InnerVolumeSpecName "kube-api-access-vs466". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.496738    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vs466\" (UniqueName: \"kubernetes.io/projected/beb91492-378e-40c4-8664-867b2c4e7e24-kube-api-access-vs466\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.496791    1205 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/beb91492-378e-40c4-8664-867b2c4e7e24-gcp-creds\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:33:29 addons-230451 kubelet[1205]: I0923 10:33:29.717828    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41306b0-40aa-4b7e-b9f3-931550e87f01" path="/var/lib/kubelet/pods/b41306b0-40aa-4b7e-b9f3-931550e87f01/volumes"
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.303493    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv92c\" (UniqueName: \"kubernetes.io/projected/71f47a69-a374-4586-8d8b-0ec84aeee203-kube-api-access-fv92c\") pod \"71f47a69-a374-4586-8d8b-0ec84aeee203\" (UID: \"71f47a69-a374-4586-8d8b-0ec84aeee203\") "
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.304271    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmgg\" (UniqueName: \"kubernetes.io/projected/fab26ceb-8538-4146-9f14-955f715b3dd7-kube-api-access-xgmgg\") pod \"fab26ceb-8538-4146-9f14-955f715b3dd7\" (UID: \"fab26ceb-8538-4146-9f14-955f715b3dd7\") "
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.307549    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab26ceb-8538-4146-9f14-955f715b3dd7-kube-api-access-xgmgg" (OuterVolumeSpecName: "kube-api-access-xgmgg") pod "fab26ceb-8538-4146-9f14-955f715b3dd7" (UID: "fab26ceb-8538-4146-9f14-955f715b3dd7"). InnerVolumeSpecName "kube-api-access-xgmgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.309825    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f47a69-a374-4586-8d8b-0ec84aeee203-kube-api-access-fv92c" (OuterVolumeSpecName: "kube-api-access-fv92c") pod "71f47a69-a374-4586-8d8b-0ec84aeee203" (UID: "71f47a69-a374-4586-8d8b-0ec84aeee203"). InnerVolumeSpecName "kube-api-access-fv92c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.405064    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xgmgg\" (UniqueName: \"kubernetes.io/projected/fab26ceb-8538-4146-9f14-955f715b3dd7-kube-api-access-xgmgg\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:33:30 addons-230451 kubelet[1205]: I0923 10:33:30.405104    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fv92c\" (UniqueName: \"kubernetes.io/projected/71f47a69-a374-4586-8d8b-0ec84aeee203-kube-api-access-fv92c\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:33:31 addons-230451 kubelet[1205]: I0923 10:33:31.083456    1205 scope.go:117] "RemoveContainer" containerID="f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6"
	Sep 23 10:33:31 addons-230451 kubelet[1205]: I0923 10:33:31.089426    1205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.077925593 podStartE2EDuration="3.089409897s" podCreationTimestamp="2024-09-23 10:33:28 +0000 UTC" firstStartedPulling="2024-09-23 10:33:28.833886267 +0000 UTC m=+655.259141956" lastFinishedPulling="2024-09-23 10:33:29.845370568 +0000 UTC m=+656.270626260" observedRunningTime="2024-09-23 10:33:31.087836322 +0000 UTC m=+657.513092031" watchObservedRunningTime="2024-09-23 10:33:31.089409897 +0000 UTC m=+657.514665605"
	Sep 23 10:33:31 addons-230451 kubelet[1205]: I0923 10:33:31.134468    1205 scope.go:117] "RemoveContainer" containerID="f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6"
	Sep 23 10:33:31 addons-230451 kubelet[1205]: E0923 10:33:31.134973    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6\": container with ID starting with f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6 not found: ID does not exist" containerID="f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6"
	Sep 23 10:33:31 addons-230451 kubelet[1205]: I0923 10:33:31.135009    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6"} err="failed to get container status \"f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6\": rpc error: code = NotFound desc = could not find container \"f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6\": container with ID starting with f45ac7a43d3a488a5c0131e8db081797aee27facdda66d6996f255dbd9e2eeb6 not found: ID does not exist"
	Sep 23 10:33:31 addons-230451 kubelet[1205]: I0923 10:33:31.135031    1205 scope.go:117] "RemoveContainer" containerID="1b4f9e3c2168ad45dda5e990a9a2b57a6fd3f16958bc1ea0093b13ac69c4b429"
	
	
	==> storage-provisioner [48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024] <==
	I0923 10:22:46.156565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:22:46.196845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:22:46.202503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:22:46.219408       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:22:46.219529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	I0923 10:22:46.220596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfe369ce-2e58-4a81-9323-18883c63569e", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9 became leader
	I0923 10:22:46.321402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-230451 -n addons-230451
helpers_test.go:261: (dbg) Run:  kubectl --context addons-230451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-b7shb ingress-nginx-admission-patch-278z9
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-230451 describe pod busybox ingress-nginx-admission-create-b7shb ingress-nginx-admission-patch-278z9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-230451 describe pod busybox ingress-nginx-admission-create-b7shb ingress-nginx-admission-patch-278z9: exit status 1 (63.926179ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-230451/192.168.39.142
	Start Time:       Mon, 23 Sep 2024 10:24:15 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ctzjs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ctzjs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-230451
	  Normal   Pulling    7m59s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b7shb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-278z9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-230451 describe pod busybox ingress-nginx-admission-create-b7shb ingress-nginx-admission-patch-278z9: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.27s)
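Note: the busybox describe output above shows that pod stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unable to retrieve auth token: invalid username/password: unauthorized". A quick way to reproduce the pull outside the kubelet — a hypothetical follow-up, not part of the recorded run, assuming crictl is available inside the VM — would be:

	out/minikube-linux-amd64 -p addons-230451 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If the same auth error comes back from a direct pull, that would point at the image pull credentials on the node rather than anything cluster-side.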

                                                
                                    
TestAddons/parallel/Ingress (155.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-230451 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-230451 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-230451 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5b95300c-41ad-4e8f-8edb-9269b715bfdc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5b95300c-41ad-4e8f-8edb-9269b715bfdc] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004356922s
I0923 10:33:43.836843   11139 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-230451 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.998342408s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-230451 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.142
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable ingress-dns --alsologtostderr -v=1: (1.597690607s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable ingress --alsologtostderr -v=1: (7.713249546s)
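Note: the curl through minikube ssh above timed out — exit status 28 is curl's operation-timeout code, propagated back through ssh — even though the nginx pod had been reported Running. A hypothetical manual retry with an explicit timeout and verbose output, assuming the ingress controller had not yet been disabled, would look like:

	out/minikube-linux-amd64 -p addons-230451 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

The verbose output would show whether the connection to the controller on port 80 is refused, hangs, or returns an unexpected status.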
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-230451 -n addons-230451
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 logs -n 25: (1.252139038s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-944972                                                                     | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-004546                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34819                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-004546                                                                     | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-230451 --wait=true                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-230451 ssh cat                                                                       | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | /opt/local-path-provisioner/pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| ip      | addons-230451 ip                                                                            | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-230451 addons                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-230451 ssh curl -s                                                                   | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-230451 addons                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-230451 ip                                                                            | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:54.509930   11896 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:54.510176   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510185   11896 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:54.510189   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510371   11896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:21:54.510927   11896 out.go:352] Setting JSON to false
	I0923 10:21:54.511749   11896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":257,"bootTime":1727086657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:54.511839   11896 start.go:139] virtualization: kvm guest
	I0923 10:21:54.513820   11896 out.go:177] * [addons-230451] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:54.515097   11896 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:54.515105   11896 notify.go:220] Checking for updates...
	I0923 10:21:54.517574   11896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:54.518845   11896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:21:54.519947   11896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:54.520978   11896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:54.521954   11896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:54.523196   11896 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:54.554453   11896 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:21:54.555559   11896 start.go:297] selected driver: kvm2
	I0923 10:21:54.555580   11896 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:21:54.555601   11896 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:54.556616   11896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.556711   11896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:21:54.571291   11896 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:21:54.571371   11896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:54.571718   11896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:54.571756   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:21:54.571824   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:21:54.571833   11896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:54.571901   11896 start.go:340] cluster config:
	{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:54.572023   11896 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.574799   11896 out.go:177] * Starting "addons-230451" primary control-plane node in "addons-230451" cluster
	I0923 10:21:54.575781   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:54.575828   11896 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:54.575840   11896 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:54.575908   11896 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:54.575919   11896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:54.576245   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:21:54.576269   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json: {Name:mke557599469685c702152c654faebe5e1d076a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:54.576419   11896 start.go:360] acquireMachinesLock for addons-230451: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:21:54.576485   11896 start.go:364] duration metric: took 50.98µs to acquireMachinesLock for "addons-230451"
	I0923 10:21:54.576507   11896 start.go:93] Provisioning new machine with config: &{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:54.576577   11896 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:21:54.577964   11896 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 10:21:54.578088   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:21:54.578137   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:21:54.592162   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0923 10:21:54.592680   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:21:54.593173   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:21:54.593196   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:21:54.593565   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:21:54.593723   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:21:54.593874   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:21:54.593988   11896 start.go:159] libmachine.API.Create for "addons-230451" (driver="kvm2")
	I0923 10:21:54.594024   11896 client.go:168] LocalClient.Create starting
	I0923 10:21:54.594063   11896 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:21:54.862234   11896 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:21:54.952456   11896 main.go:141] libmachine: Running pre-create checks...
	I0923 10:21:54.952476   11896 main.go:141] libmachine: (addons-230451) Calling .PreCreateCheck
	I0923 10:21:54.952976   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:21:54.953437   11896 main.go:141] libmachine: Creating machine...
	I0923 10:21:54.953450   11896 main.go:141] libmachine: (addons-230451) Calling .Create
	I0923 10:21:54.953678   11896 main.go:141] libmachine: (addons-230451) Creating KVM machine...
	I0923 10:21:54.954811   11896 main.go:141] libmachine: (addons-230451) DBG | found existing default KVM network
	I0923 10:21:54.955692   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:54.955529   11918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0923 10:21:54.955752   11896 main.go:141] libmachine: (addons-230451) DBG | created network xml: 
	I0923 10:21:54.955775   11896 main.go:141] libmachine: (addons-230451) DBG | <network>
	I0923 10:21:54.955786   11896 main.go:141] libmachine: (addons-230451) DBG |   <name>mk-addons-230451</name>
	I0923 10:21:54.955801   11896 main.go:141] libmachine: (addons-230451) DBG |   <dns enable='no'/>
	I0923 10:21:54.955811   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955821   11896 main.go:141] libmachine: (addons-230451) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:21:54.955831   11896 main.go:141] libmachine: (addons-230451) DBG |     <dhcp>
	I0923 10:21:54.955840   11896 main.go:141] libmachine: (addons-230451) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:21:54.955852   11896 main.go:141] libmachine: (addons-230451) DBG |     </dhcp>
	I0923 10:21:54.955859   11896 main.go:141] libmachine: (addons-230451) DBG |   </ip>
	I0923 10:21:54.955868   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955876   11896 main.go:141] libmachine: (addons-230451) DBG | </network>
	I0923 10:21:54.955886   11896 main.go:141] libmachine: (addons-230451) DBG | 
	I0923 10:21:54.961052   11896 main.go:141] libmachine: (addons-230451) DBG | trying to create private KVM network mk-addons-230451 192.168.39.0/24...
	I0923 10:21:55.025203   11896 main.go:141] libmachine: (addons-230451) DBG | private KVM network mk-addons-230451 192.168.39.0/24 created
	I0923 10:21:55.025234   11896 main.go:141] libmachine: (addons-230451) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.025245   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.025189   11918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.025262   11896 main.go:141] libmachine: (addons-230451) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:21:55.025326   11896 main.go:141] libmachine: (addons-230451) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:21:55.288584   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.288456   11918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa...
	I0923 10:21:55.387986   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387858   11918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk...
	I0923 10:21:55.388016   11896 main.go:141] libmachine: (addons-230451) DBG | Writing magic tar header
	I0923 10:21:55.388026   11896 main.go:141] libmachine: (addons-230451) DBG | Writing SSH key tar header
	I0923 10:21:55.388034   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387970   11918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.388050   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451
	I0923 10:21:55.388086   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 (perms=drwx------)
	I0923 10:21:55.388098   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:21:55.388113   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:21:55.388129   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:21:55.388139   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:21:55.388148   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.388154   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:21:55.388171   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:21:55.388180   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:55.388192   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:21:55.388205   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:21:55.388216   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:21:55.388227   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home
	I0923 10:21:55.388234   11896 main.go:141] libmachine: (addons-230451) DBG | Skipping /home - not owner
	I0923 10:21:55.389182   11896 main.go:141] libmachine: (addons-230451) define libvirt domain using xml: 
	I0923 10:21:55.389204   11896 main.go:141] libmachine: (addons-230451) <domain type='kvm'>
	I0923 10:21:55.389213   11896 main.go:141] libmachine: (addons-230451)   <name>addons-230451</name>
	I0923 10:21:55.389220   11896 main.go:141] libmachine: (addons-230451)   <memory unit='MiB'>4000</memory>
	I0923 10:21:55.389228   11896 main.go:141] libmachine: (addons-230451)   <vcpu>2</vcpu>
	I0923 10:21:55.389238   11896 main.go:141] libmachine: (addons-230451)   <features>
	I0923 10:21:55.389248   11896 main.go:141] libmachine: (addons-230451)     <acpi/>
	I0923 10:21:55.389257   11896 main.go:141] libmachine: (addons-230451)     <apic/>
	I0923 10:21:55.389264   11896 main.go:141] libmachine: (addons-230451)     <pae/>
	I0923 10:21:55.389273   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389291   11896 main.go:141] libmachine: (addons-230451)   </features>
	I0923 10:21:55.389303   11896 main.go:141] libmachine: (addons-230451)   <cpu mode='host-passthrough'>
	I0923 10:21:55.389308   11896 main.go:141] libmachine: (addons-230451)   
	I0923 10:21:55.389313   11896 main.go:141] libmachine: (addons-230451)   </cpu>
	I0923 10:21:55.389318   11896 main.go:141] libmachine: (addons-230451)   <os>
	I0923 10:21:55.389337   11896 main.go:141] libmachine: (addons-230451)     <type>hvm</type>
	I0923 10:21:55.389348   11896 main.go:141] libmachine: (addons-230451)     <boot dev='cdrom'/>
	I0923 10:21:55.389352   11896 main.go:141] libmachine: (addons-230451)     <boot dev='hd'/>
	I0923 10:21:55.389359   11896 main.go:141] libmachine: (addons-230451)     <bootmenu enable='no'/>
	I0923 10:21:55.389363   11896 main.go:141] libmachine: (addons-230451)   </os>
	I0923 10:21:55.389464   11896 main.go:141] libmachine: (addons-230451)   <devices>
	I0923 10:21:55.389496   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='cdrom'>
	I0923 10:21:55.389515   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/boot2docker.iso'/>
	I0923 10:21:55.389532   11896 main.go:141] libmachine: (addons-230451)       <target dev='hdc' bus='scsi'/>
	I0923 10:21:55.389544   11896 main.go:141] libmachine: (addons-230451)       <readonly/>
	I0923 10:21:55.389553   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389565   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='disk'>
	I0923 10:21:55.389576   11896 main.go:141] libmachine: (addons-230451)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:21:55.389584   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk'/>
	I0923 10:21:55.389594   11896 main.go:141] libmachine: (addons-230451)       <target dev='hda' bus='virtio'/>
	I0923 10:21:55.389602   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389616   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389629   11896 main.go:141] libmachine: (addons-230451)       <source network='mk-addons-230451'/>
	I0923 10:21:55.389639   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389648   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389658   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389669   11896 main.go:141] libmachine: (addons-230451)       <source network='default'/>
	I0923 10:21:55.389678   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389684   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389696   11896 main.go:141] libmachine: (addons-230451)     <serial type='pty'>
	I0923 10:21:55.389707   11896 main.go:141] libmachine: (addons-230451)       <target port='0'/>
	I0923 10:21:55.389716   11896 main.go:141] libmachine: (addons-230451)     </serial>
	I0923 10:21:55.389725   11896 main.go:141] libmachine: (addons-230451)     <console type='pty'>
	I0923 10:21:55.389735   11896 main.go:141] libmachine: (addons-230451)       <target type='serial' port='0'/>
	I0923 10:21:55.389746   11896 main.go:141] libmachine: (addons-230451)     </console>
	I0923 10:21:55.389753   11896 main.go:141] libmachine: (addons-230451)     <rng model='virtio'>
	I0923 10:21:55.389772   11896 main.go:141] libmachine: (addons-230451)       <backend model='random'>/dev/random</backend>
	I0923 10:21:55.389789   11896 main.go:141] libmachine: (addons-230451)     </rng>
	I0923 10:21:55.389804   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389813   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389825   11896 main.go:141] libmachine: (addons-230451)   </devices>
	I0923 10:21:55.389833   11896 main.go:141] libmachine: (addons-230451) </domain>
	I0923 10:21:55.389840   11896 main.go:141] libmachine: (addons-230451) 
	I0923 10:21:55.442274   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:1e:65:9c in network default
	I0923 10:21:55.442896   11896 main.go:141] libmachine: (addons-230451) Ensuring networks are active...
	I0923 10:21:55.442919   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:55.443620   11896 main.go:141] libmachine: (addons-230451) Ensuring network default is active
	I0923 10:21:55.443936   11896 main.go:141] libmachine: (addons-230451) Ensuring network mk-addons-230451 is active
	I0923 10:21:55.444473   11896 main.go:141] libmachine: (addons-230451) Getting domain xml...
	I0923 10:21:55.445327   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:57.016016   11896 main.go:141] libmachine: (addons-230451) Waiting to get IP...
	I0923 10:21:57.016667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.017033   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.017054   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.017010   11918 retry.go:31] will retry after 208.635315ms: waiting for machine to come up
	I0923 10:21:57.227392   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.227733   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.227756   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.227648   11918 retry.go:31] will retry after 297.216389ms: waiting for machine to come up
	I0923 10:21:57.526245   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.526673   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.526694   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.526643   11918 retry.go:31] will retry after 293.828552ms: waiting for machine to come up
	I0923 10:21:57.822073   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.822442   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.822463   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.822410   11918 retry.go:31] will retry after 602.044959ms: waiting for machine to come up
	I0923 10:21:58.425996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:58.426504   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:58.426525   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:58.426453   11918 retry.go:31] will retry after 610.746842ms: waiting for machine to come up
	I0923 10:21:59.039341   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.039865   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.039886   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.039817   11918 retry.go:31] will retry after 688.678666ms: waiting for machine to come up
	I0923 10:21:59.730224   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.730635   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.730660   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.730596   11918 retry.go:31] will retry after 1.028645485s: waiting for machine to come up
	I0923 10:22:00.760735   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:00.761163   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:00.761193   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:00.761110   11918 retry.go:31] will retry after 973.08502ms: waiting for machine to come up
	I0923 10:22:01.735437   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:01.735826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:01.735858   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:01.735768   11918 retry.go:31] will retry after 1.395648774s: waiting for machine to come up
	I0923 10:22:03.134422   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:03.134826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:03.134854   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:03.134760   11918 retry.go:31] will retry after 1.707966873s: waiting for machine to come up
	I0923 10:22:04.844605   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:04.845022   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:04.845045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:04.844996   11918 retry.go:31] will retry after 2.702470731s: waiting for machine to come up
	I0923 10:22:07.550535   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:07.550864   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:07.550880   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:07.550829   11918 retry.go:31] will retry after 2.889295682s: waiting for machine to come up
	I0923 10:22:10.441287   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:10.441659   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:10.441679   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:10.441632   11918 retry.go:31] will retry after 2.869623302s: waiting for machine to come up
	I0923 10:22:13.314625   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:13.315023   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:13.315045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:13.314983   11918 retry.go:31] will retry after 3.640221936s: waiting for machine to come up
	I0923 10:22:16.958659   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959119   11896 main.go:141] libmachine: (addons-230451) Found IP for machine: 192.168.39.142
	I0923 10:22:16.959156   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has current primary IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959166   11896 main.go:141] libmachine: (addons-230451) Reserving static IP address...
	I0923 10:22:16.959462   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find host DHCP lease matching {name: "addons-230451", mac: "52:54:00:23:7b:36", ip: "192.168.39.142"} in network mk-addons-230451
	I0923 10:22:17.029441   11896 main.go:141] libmachine: (addons-230451) DBG | Getting to WaitForSSH function...
	I0923 10:22:17.029468   11896 main.go:141] libmachine: (addons-230451) Reserved static IP address: 192.168.39.142
	I0923 10:22:17.029481   11896 main.go:141] libmachine: (addons-230451) Waiting for SSH to be available...
	I0923 10:22:17.031574   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.031976   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.032008   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.032179   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH client type: external
	I0923 10:22:17.032208   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa (-rw-------)
	I0923 10:22:17.032242   11896 main.go:141] libmachine: (addons-230451) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:22:17.032261   11896 main.go:141] libmachine: (addons-230451) DBG | About to run SSH command:
	I0923 10:22:17.032275   11896 main.go:141] libmachine: (addons-230451) DBG | exit 0
	I0923 10:22:17.165353   11896 main.go:141] libmachine: (addons-230451) DBG | SSH cmd err, output: <nil>: 
	I0923 10:22:17.165603   11896 main.go:141] libmachine: (addons-230451) KVM machine creation complete!
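The repeated "will retry after ... waiting for machine to come up" lines above come from a retry helper polling libvirt until the new VM has a DHCP lease. A minimal, self-contained Go sketch of that kind of randomized backoff polling follows; the function name waitForIP, the backoff bounds, and the fake lookup closure are illustrative assumptions, not minikube's actual retry.go code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns a non-empty IP or the deadline
// passes, sleeping a randomized, growing interval between attempts, in the
// spirit of the "will retry after ..." lines in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no DHCP lease yet") // simulate the VM not being up yet
		}
		return "192.168.39.142", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}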
	I0923 10:22:17.165853   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:17.166404   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166615   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166760   11896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:22:17.166775   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:17.167984   11896 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:22:17.167997   11896 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:22:17.168002   11896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:22:17.168007   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.170262   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170628   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.170654   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.170943   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171091   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171216   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.171352   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.171523   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.171532   11896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:22:17.276650   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.276675   11896 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:22:17.276682   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.279238   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279568   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.279618   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.279902   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280049   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280188   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.280328   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.280526   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.280539   11896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:22:17.390222   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:22:17.390295   11896 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:22:17.390302   11896 main.go:141] libmachine: Provisioning with buildroot...
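The "cat /etc/os-release" probe above is how the provisioner is detected; here the ID field is buildroot, so the buildroot provisioner is chosen. Below is a small Go sketch of pulling the ID value out of os-release-style output; detectProvisioner and the sample string are hypothetical, and minikube's real detection logic is more involved.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner returns the ID field from /etc/os-release-style text,
// the same data the `cat /etc/os-release` command above produces.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectProvisioner(sample)) // prints: buildroot
}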
	I0923 10:22:17.390309   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390534   11896 buildroot.go:166] provisioning hostname "addons-230451"
	I0923 10:22:17.390564   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390733   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.393254   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393637   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.393661   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393806   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.393974   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394097   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.394503   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.394674   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.394685   11896 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-230451 && echo "addons-230451" | sudo tee /etc/hostname
	I0923 10:22:17.515225   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-230451
	
	I0923 10:22:17.515256   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.517989   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518336   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.518363   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518538   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.518711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518849   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.519103   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.519305   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.519322   11896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-230451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-230451/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-230451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:17.634431   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.634459   11896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:22:17.634507   11896 buildroot.go:174] setting up certificates
	I0923 10:22:17.634531   11896 provision.go:84] configureAuth start
	I0923 10:22:17.634546   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.634804   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:17.637289   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637645   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.637672   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637796   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.639619   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.639935   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.639958   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.640107   11896 provision.go:143] copyHostCerts
	I0923 10:22:17.640166   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:22:17.640266   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:22:17.640357   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:22:17.640412   11896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.addons-230451 san=[127.0.0.1 192.168.39.142 addons-230451 localhost minikube]
	I0923 10:22:17.714679   11896 provision.go:177] copyRemoteCerts
	I0923 10:22:17.714730   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:17.714753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.717181   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717480   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.717505   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717645   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.717825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.717941   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.718046   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:17.804191   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:22:17.829062   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:17.853034   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:22:17.877800   11896 provision.go:87] duration metric: took 243.235441ms to configureAuth
	I0923 10:22:17.877829   11896 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:22:17.877983   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:17.878058   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.880387   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.880814   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.880840   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.881030   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.881209   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881361   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881549   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.881728   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.881938   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.881960   11896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:18.112582   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:22:18.112611   11896 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:22:18.112619   11896 main.go:141] libmachine: (addons-230451) Calling .GetURL
	I0923 10:22:18.114015   11896 main.go:141] libmachine: (addons-230451) DBG | Using libvirt version 6000000
	I0923 10:22:18.115892   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116172   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.116200   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116375   11896 main.go:141] libmachine: Docker is up and running!
	I0923 10:22:18.116385   11896 main.go:141] libmachine: Reticulating splines...
	I0923 10:22:18.116393   11896 client.go:171] duration metric: took 23.522358813s to LocalClient.Create
	I0923 10:22:18.116418   11896 start.go:167] duration metric: took 23.522430116s to libmachine.API.Create "addons-230451"
	I0923 10:22:18.116432   11896 start.go:293] postStartSetup for "addons-230451" (driver="kvm2")
	I0923 10:22:18.116444   11896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:18.116465   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.116705   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:18.116725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.118667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.118943   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.118966   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.119088   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.119236   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.119375   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.119475   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.203671   11896 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:18.207849   11896 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:22:18.207881   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:22:18.207965   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:22:18.208002   11896 start.go:296] duration metric: took 91.564102ms for postStartSetup
	I0923 10:22:18.208041   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:18.208600   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.210821   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211132   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.211160   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211370   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:22:18.211568   11896 start.go:128] duration metric: took 23.634978913s to createHost
	I0923 10:22:18.211597   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.213764   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214103   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.214126   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214261   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.214411   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214520   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214653   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.214811   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:18.214999   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:18.215010   11896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:22:18.322271   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727086938.296352149
	
	I0923 10:22:18.322297   11896 fix.go:216] guest clock: 1727086938.296352149
	I0923 10:22:18.322306   11896 fix.go:229] Guest: 2024-09-23 10:22:18.296352149 +0000 UTC Remote: 2024-09-23 10:22:18.211580004 +0000 UTC m=+23.734217766 (delta=84.772145ms)
	I0923 10:22:18.322326   11896 fix.go:200] guest clock delta is within tolerance: 84.772145ms
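The "date +%s.%N" output above is compared against the host's view of the time to compute the guest clock delta (roughly 84.77ms here) and check it against a tolerance. A minimal Go sketch of that comparison, reusing the two timestamps from this log, is below; clockDelta and the 2s tolerance are illustrative assumptions rather than minikube's exact fix.go logic.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute skew between the guest clock (seconds since
// the epoch, as printed by `date +%s.%N`) and a reference host time.
func clockDelta(guestSeconds float64, host time.Time) time.Duration {
	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Timestamps taken from the log lines above (float conversion loses a few ns).
	host := time.Date(2024, 9, 23, 10, 22, 18, 211580004, time.UTC)
	d := clockDelta(1727086938.296352149, host)
	fmt.Printf("delta=%v within tolerance=%v\n", d, d <= 2*time.Second)
}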
	I0923 10:22:18.322330   11896 start.go:83] releasing machines lock for "addons-230451", held for 23.74583569s
	I0923 10:22:18.322350   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.322592   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.325284   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325621   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.325666   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325767   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326263   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326436   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326529   11896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:18.326593   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.326632   11896 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:18.326655   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.329047   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329309   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329394   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329418   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329575   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329694   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329721   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.329853   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.329920   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329983   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.330068   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.330292   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.330417   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.438062   11896 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:18.444025   11896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:18.601874   11896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:22:18.607742   11896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:22:18.607802   11896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:18.624264   11896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:22:18.624289   11896 start.go:495] detecting cgroup driver to use...
	I0923 10:22:18.624345   11896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:18.639564   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:18.653568   11896 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:18.653621   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:18.667712   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:18.681874   11896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:18.792202   11896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:18.925990   11896 docker.go:233] disabling docker service ...
	I0923 10:22:18.926064   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:18.940378   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:18.953192   11896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:19.087815   11896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:19.203155   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:19.216978   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:19.235019   11896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:19.235096   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.245714   11896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:19.245818   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.256490   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.267602   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.278326   11896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:19.289301   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.299699   11896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.317469   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
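The run of sed edits above configures /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is set to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A rough Go equivalent of the first two substitutions is sketched below; the starting values in the demo config string are made up.

package main

import (
	"fmt"
	"regexp"
)

// Rewrites the pause_image and cgroup_manager keys the same way the two
// `sudo sed -i ...` commands above do. Demo input only; not real config.
func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}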
	I0923 10:22:19.328378   11896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:19.338564   11896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:19.338621   11896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:19.352191   11896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:22:19.362359   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:19.484977   11896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:19.579332   11896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:19.579411   11896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:19.584157   11896 start.go:563] Will wait 60s for crictl version
	I0923 10:22:19.584218   11896 ssh_runner.go:195] Run: which crictl
	I0923 10:22:19.587946   11896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:19.628720   11896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:22:19.628857   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.657600   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.690821   11896 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:22:19.692029   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:19.694415   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694719   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:19.694755   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694901   11896 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:19.698798   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:19.711452   11896 kubeadm.go:883] updating cluster {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:19.711550   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:19.711592   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:19.747339   11896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:22:19.747410   11896 ssh_runner.go:195] Run: which lz4
	I0923 10:22:19.751336   11896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:22:19.755656   11896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:22:19.755687   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:22:21.047377   11896 crio.go:462] duration metric: took 1.296092639s to copy over tarball
	I0923 10:22:21.047452   11896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:22:23.149022   11896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.101536224s)
	I0923 10:22:23.149063   11896 crio.go:469] duration metric: took 2.101658311s to extract the tarball
	I0923 10:22:23.149074   11896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 10:22:23.186090   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:23.231874   11896 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:23.231895   11896 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:23.231902   11896 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.31.1 crio true true} ...
	I0923 10:22:23.231987   11896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-230451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:23.232047   11896 ssh_runner.go:195] Run: crio config
	I0923 10:22:23.284759   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:23.284784   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:23.284800   11896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:23.284832   11896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-230451 NodeName:addons-230451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:23.284967   11896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-230451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:23.285038   11896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:23.294894   11896 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:23.294968   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:23.304559   11896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:22:23.321682   11896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:23.338467   11896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 10:22:23.355102   11896 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:23.359077   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:23.371614   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:23.497716   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:23.524962   11896 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451 for IP: 192.168.39.142
	I0923 10:22:23.524985   11896 certs.go:194] generating shared ca certs ...
	I0923 10:22:23.525001   11896 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.525125   11896 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:22:23.653794   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt ...
	I0923 10:22:23.653826   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt: {Name:mk0d92c2a9963fcf15ffb070721c588192e7736e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.653986   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key ...
	I0923 10:22:23.653996   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key: {Name:mkeb4e4ef8ef3c516f46598d48867c8293e2d97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.654085   11896 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:22:23.786686   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt ...
	I0923 10:22:23.786718   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt: {Name:mk4094838d6b10d87fe353fc7ecb8f6c0f591232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786881   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key ...
	I0923 10:22:23.786892   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key: {Name:mkae41c92d5aff93d9eaa4a90706202e465fd08d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786960   11896 certs.go:256] generating profile certs ...
	I0923 10:22:23.787011   11896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key
	I0923 10:22:23.787024   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt with IP's: []
	I0923 10:22:24.040672   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt ...
	I0923 10:22:24.040705   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: {Name:mk12ca8a37f255852c15957acdaaac5803f6db08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040873   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key ...
	I0923 10:22:24.040883   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key: {Name:mk5ec5d734cc6123b964d4a8aa27ee9625037ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040949   11896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89
	I0923 10:22:24.040966   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0923 10:22:24.248598   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 ...
	I0923 10:22:24.248628   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89: {Name:mk9332743467473c4d78e8a673a2ddc310d8086b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248782   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 ...
	I0923 10:22:24.248794   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89: {Name:mk563d416f16b853b493dbf6317b9fb699d8141e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248878   11896 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt
	I0923 10:22:24.248949   11896 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key
	I0923 10:22:24.248994   11896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key
	I0923 10:22:24.249010   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt with IP's: []
	I0923 10:22:24.333105   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt ...
	I0923 10:22:24.333135   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt: {Name:mk1c36ccdfe89e6949c41221860582d71d9abecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.333299   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key ...
	I0923 10:22:24.333309   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key: {Name:mk001f630ca2a3ebb6948b9fe6cbe0a137191074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
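The certs.go lines above generate local CAs (minikubeCA, proxyClientCA) and then CA-signed profile certificates whose SANs include the node IP 192.168.39.142. A compact Go sketch of that pattern using crypto/x509 follows; key sizes, lifetimes, serial numbers, and subject names here are illustrative assumptions, not necessarily the values minikube uses, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA (stands in for the "minikubeCA" ca cert above).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// CA-signed server cert with IP and DNS SANs, like the apiserver/profile certs above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-230451"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.142"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"addons-230451", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}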
	I0923 10:22:24.333516   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:24.333586   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:22:24.333624   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:24.333649   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:22:24.334174   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:24.364904   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:24.389692   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:24.413480   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:22:24.437332   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:24.463620   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:22:24.489652   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:24.515979   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:24.542229   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:24.568853   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:24.589287   11896 ssh_runner.go:195] Run: openssl version
	I0923 10:22:24.596782   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:24.607940   11896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612566   11896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612615   11896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.618835   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:22:24.629990   11896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:24.634389   11896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:24.634449   11896 kubeadm.go:392] StartCluster: {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:24.634545   11896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:24.634624   11896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:24.674296   11896 cri.go:89] found id: ""
	I0923 10:22:24.674376   11896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:24.684623   11896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:24.695036   11896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:24.707226   11896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:24.707249   11896 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:24.707293   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:24.716855   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:24.716917   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:24.727043   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:24.736874   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:24.736946   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:24.746697   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.756313   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:24.756377   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.766227   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:24.775698   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:24.775768   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:22:24.786611   11896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:22:24.838767   11896 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:24.838821   11896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:24.940902   11896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:24.941087   11896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:24.941212   11896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:24.948875   11896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:25.257696   11896 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:25.257801   11896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:25.257881   11896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:25.257985   11896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:25.258096   11896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:25.363288   11896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:25.425568   11896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:25.496334   11896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:25.496516   11896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.661761   11896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:25.661907   11896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.727123   11896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:25.906579   11896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:25.974535   11896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:25.974623   11896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:26.123945   11896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:26.269690   11896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:26.518592   11896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:26.597902   11896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:26.831627   11896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:26.832272   11896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:26.836780   11896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:26.838584   11896 out.go:235]   - Booting up control plane ...
	I0923 10:22:26.838682   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:26.838755   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:26.839231   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:26.853944   11896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:26.861028   11896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:26.861120   11896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:26.983148   11896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:26.983286   11896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:27.483290   11896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.847264ms
	I0923 10:22:27.483400   11896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:32.981821   11896 kubeadm.go:310] [api-check] The API server is healthy after 5.502127762s
	I0923 10:22:32.994814   11896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:33.013765   11896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:33.046425   11896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:33.046697   11896 kubeadm.go:310] [mark-control-plane] Marking the node addons-230451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:33.059414   11896 kubeadm.go:310] [bootstrap-token] Using token: 2hvssy.27mbk5fz3uxysew6
	I0923 10:22:33.060728   11896 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:33.060856   11896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:33.066668   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:33.078485   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:33.081626   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:33.087430   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:33.091457   11896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:33.390136   11896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:33.813952   11896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:34.387868   11896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:34.388882   11896 kubeadm.go:310] 
	I0923 10:22:34.388988   11896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:34.388998   11896 kubeadm.go:310] 
	I0923 10:22:34.389127   11896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:34.389143   11896 kubeadm.go:310] 
	I0923 10:22:34.389170   11896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:34.389244   11896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:34.389326   11896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:34.389341   11896 kubeadm.go:310] 
	I0923 10:22:34.389420   11896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:34.389431   11896 kubeadm.go:310] 
	I0923 10:22:34.389498   11896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:34.389516   11896 kubeadm.go:310] 
	I0923 10:22:34.389562   11896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:34.389676   11896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:34.389782   11896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:34.389792   11896 kubeadm.go:310] 
	I0923 10:22:34.389900   11896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:34.389993   11896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:34.390002   11896 kubeadm.go:310] 
	I0923 10:22:34.390104   11896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390230   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:22:34.390260   11896 kubeadm.go:310] 	--control-plane 
	I0923 10:22:34.390266   11896 kubeadm.go:310] 
	I0923 10:22:34.390390   11896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:34.390400   11896 kubeadm.go:310] 
	I0923 10:22:34.390516   11896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390643   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:22:34.391299   11896 kubeadm.go:310] W0923 10:22:24.818359     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391630   11896 kubeadm.go:310] W0923 10:22:24.819029     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391761   11896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:22:34.391794   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:34.391806   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:34.393547   11896 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:22:34.394830   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:22:34.412319   11896 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:22:34.431070   11896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:34.431130   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:34.431136   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-230451 minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-230451 minikube.k8s.io/primary=true
	I0923 10:22:34.546608   11896 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:34.546625   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.047328   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.546823   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.046794   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.547056   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.046889   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.547633   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.046761   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.547665   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.047581   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.133362   11896 kubeadm.go:1113] duration metric: took 4.702301784s to wait for elevateKubeSystemPrivileges
	I0923 10:22:39.133409   11896 kubeadm.go:394] duration metric: took 14.498964743s to StartCluster
	I0923 10:22:39.133426   11896 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.133569   11896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:22:39.133997   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.134254   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:39.134262   11896 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:39.134340   11896 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:39.134490   11896 addons.go:69] Setting yakd=true in profile "addons-230451"
	I0923 10:22:39.134508   11896 addons.go:234] Setting addon yakd=true in "addons-230451"
	I0923 10:22:39.134521   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.134537   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134577   11896 addons.go:69] Setting inspektor-gadget=true in profile "addons-230451"
	I0923 10:22:39.134590   11896 addons.go:234] Setting addon inspektor-gadget=true in "addons-230451"
	I0923 10:22:39.134616   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134702   11896 addons.go:69] Setting storage-provisioner=true in profile "addons-230451"
	I0923 10:22:39.134726   11896 addons.go:234] Setting addon storage-provisioner=true in "addons-230451"
	I0923 10:22:39.134749   11896 addons.go:69] Setting registry=true in profile "addons-230451"
	I0923 10:22:39.135058   11896 addons.go:234] Setting addon registry=true in "addons-230451"
	I0923 10:22:39.135093   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134729   11896 addons.go:69] Setting cloud-spanner=true in profile "addons-230451"
	I0923 10:22:39.134732   11896 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-230451"
	I0923 10:22:39.135178   11896 addons.go:69] Setting volcano=true in profile "addons-230451"
	I0923 10:22:39.135163   11896 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-230451"
	I0923 10:22:39.135195   11896 addons.go:234] Setting addon volcano=true in "addons-230451"
	I0923 10:22:39.135209   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-230451"
	I0923 10:22:39.135225   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135226   11896 addons.go:69] Setting volumesnapshots=true in profile "addons-230451"
	I0923 10:22:39.135243   11896 addons.go:234] Setting addon volumesnapshots=true in "addons-230451"
	I0923 10:22:39.135269   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134757   11896 addons.go:69] Setting metrics-server=true in profile "addons-230451"
	I0923 10:22:39.135294   11896 addons.go:234] Setting addon metrics-server=true in "addons-230451"
	I0923 10:22:39.135313   11896 addons.go:234] Setting addon cloud-spanner=true in "addons-230451"
	I0923 10:22:39.135037   11896 addons.go:69] Setting default-storageclass=true in profile "addons-230451"
	I0923 10:22:39.135326   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135334   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135346   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-230451"
	I0923 10:22:39.135361   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135745   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135322   11896 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-230451"
	I0923 10:22:39.135770   11896 addons.go:69] Setting ingress-dns=true in profile "addons-230451"
	I0923 10:22:39.135775   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.135782   11896 addons.go:234] Setting addon ingress-dns=true in "addons-230451"
	I0923 10:22:39.135791   11896 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:39.135814   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135811   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135827   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135864   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136234   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136268   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136281   11896 addons.go:69] Setting gcp-auth=true in profile "addons-230451"
	I0923 10:22:39.136303   11896 mustload.go:65] Loading cluster: addons-230451
	I0923 10:22:39.136368   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136406   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.134746   11896 addons.go:69] Setting ingress=true in profile "addons-230451"
	I0923 10:22:39.136467   11896 addons.go:234] Setting addon ingress=true in "addons-230451"
	I0923 10:22:39.136921   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.137052   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137087   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137214   11896 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-230451"
	I0923 10:22:39.137372   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137507   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137538   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137549   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137614   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.137976   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137511   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.138578   11896 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:39.139899   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:39.145488   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145585   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145613   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145654   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145676   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145800   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145841   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145871   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145891   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145914   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145918   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145952   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145983   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.161544   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0923 10:22:39.161884   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0923 10:22:39.162070   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0923 10:22:39.162264   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.162826   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.162851   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.162936   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163040   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163434   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163454   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163580   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0923 10:22:39.163764   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163788   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163840   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.163934   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0923 10:22:39.164104   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.164684   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.164721   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.185510   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0923 10:22:39.185571   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.185662   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185706   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185909   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.185926   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.186778   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.186932   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.186951   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.187346   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187387   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187436   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187463   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187522   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.187703   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187731   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.192887   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.193023   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.201290   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.201305   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.201820   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.201838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.201956   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201993   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.202335   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.229941   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0923 10:22:39.229953   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:22:39.229981   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0923 10:22:39.230081   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0923 10:22:39.229945   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0923 10:22:39.230091   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0923 10:22:39.230158   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0923 10:22:39.230232   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0923 10:22:39.230239   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0923 10:22:39.230393   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.230446   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.231158   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231163   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231251   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231315   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231351   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231380   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231777   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0923 10:22:39.231833   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.231847   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.231916   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231949   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.232175   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232191   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232195   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232209   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232317   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232328   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232431   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232446   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232586   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232645   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232647   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232657   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232731   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232765   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232769   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232778   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232780   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232793   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232834   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233524   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233547   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233528   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233605   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233669   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.233682   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.233731   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233898   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.233933   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.233988   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234016   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234116   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.234147   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234176   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234491   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234491   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234526   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234552   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234889   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234926   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.235293   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.235441   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.236819   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.236838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.237864   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.238168   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.238717   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.240479   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240843   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240799   11896 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-230451"
	I0923 10:22:39.240943   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.241475   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241513   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241572   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.241620   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.241673   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241694   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:39.241712   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241728   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241939   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:39.241966   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241981   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 10:22:39.242061   11896 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:39.242209   11896 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:39.243364   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.243382   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:39.243400   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.243621   11896 addons.go:234] Setting addon default-storageclass=true in "addons-230451"
	I0923 10:22:39.243659   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.244006   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.244048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.245011   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:39.245411   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0923 10:22:39.245745   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.246261   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.246280   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.246342   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246653   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.246702   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.246763   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246918   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.247079   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.247234   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.247287   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.247413   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.248325   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:39.249556   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:39.250623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:39.251623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:39.252410   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0923 10:22:39.252964   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.253331   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.253997   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:39.254684   11896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:22:39.255992   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.256016   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.256228   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.256248   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:39.256266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.256781   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:39.257114   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.258716   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:39.259215   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259570   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.259591   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259735   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:39.259749   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:39.259767   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.259814   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.259944   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.260065   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.260176   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.262079   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0923 10:22:39.262584   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.262683   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263031   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.263060   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263202   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.263213   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.263419   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.263572   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.263624   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.264175   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.264214   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.264455   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.264597   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.265940   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.265968   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.271246   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0923 10:22:39.271789   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.272388   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.272405   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.272805   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.273028   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.274894   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0923 10:22:39.275213   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.275844   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.275867   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.276203   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.278018   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0923 10:22:39.278347   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0923 10:22:39.278503   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.278767   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0923 10:22:39.278898   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.279182   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.279681   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.279702   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.279763   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.280273   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.280289   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.280330   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280582   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.280689   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280918   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281367   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.281152   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0923 10:22:39.281714   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.281734   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.281796   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281834   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.282057   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.282159   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.282388   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.282544   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.282560   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.282678   11896 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:39.283012   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.283243   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.283634   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:39.283650   11896 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:39.283668   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.283893   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.285400   11896 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:39.286497   11896 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:39.286503   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.286515   11896 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:39.286544   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.286846   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.286869   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.287301   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.287493   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.287665   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.287806   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.288302   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.288696   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0923 10:22:39.289083   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.289683   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.289701   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.290084   11896 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:39.290241   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.290292   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290473   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.290735   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.290773   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290925   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.291070   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.291212   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.291343   11896 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.291363   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:39.291378   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.291451   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295085   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.295103   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295534   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.295687   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.295814   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.297105   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0923 10:22:39.298051   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298086   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298472   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298495   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0923 10:22:39.298498   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298662   11896 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:39.298748   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298766   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298991   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.299054   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299408   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299577   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300091   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300214   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.300223   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.300609   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.300821   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300911   11896 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:39.301783   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301909   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301978   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0923 10:22:39.302139   11896 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:39.302152   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:39.302178   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.302381   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.302852   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.302875   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.302984   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.303301   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.303431   11896 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:39.303515   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:39.303574   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.304688   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:39.304717   11896 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:39.304740   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.304744   11896 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.304807   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:39.304819   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.305822   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.307556   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0923 10:22:39.307586   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.307720   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:39.307774   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.308043   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.308066   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.308423   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.308972   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.309094   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.309127   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.308530   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.309353   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.309801   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.309838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.310129   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.310151   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.310205   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.310257   11896 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:39.310305   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.310367   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.310501   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.310551   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.310650   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.310779   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.311023   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311548   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.311571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311666   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.311778   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:39.311805   11896 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:39.311825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.311915   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.312185   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.312202   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:39.312219   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.312343   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0923 10:22:39.312499   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.312659   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.312900   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.312942   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.313158   11896 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.313227   11896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:39.313245   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.313364   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.313398   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.313741   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.313923   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.315763   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.315810   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316253   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.316283   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316514   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.316662   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.316765   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.316924   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.317045   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317358   11896 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:39.317533   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.317571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317710   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.317848   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.317973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.318106   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.318191   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318580   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.318598   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318878   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.319048   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.319206   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.319289   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.320204   11896 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:39.321465   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.321479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:39.321491   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.323996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324361   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.324386   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324495   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.324602   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.324711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.324788   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	W0923 10:22:39.325511   11896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
	I0923 10:22:39.325542   11896 retry.go:31] will retry after 146.678947ms: ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
	I0923 10:22:39.557159   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.580915   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:39.580948   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:39.596569   11896 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:39.596596   11896 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:39.610676   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.621265   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.641318   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.653920   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.688552   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:39.688582   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:39.695267   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:39.695299   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:39.700872   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.701278   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:39.701293   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:39.730612   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:39.730640   11896 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:39.741177   11896 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:39.741202   11896 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:39.775359   11896 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:39.775388   11896 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:39.777672   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.829748   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:39.829779   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:39.845681   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:39.845709   11896 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:39.868956   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:39.868979   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:39.878049   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:39.878072   11896 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:39.910637   11896 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:39.910662   11896 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:39.925074   11896 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:39.925100   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:39.964060   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:39.964082   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:40.059843   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.059864   11896 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:40.073448   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:40.073471   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:40.094580   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:40.094602   11896 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:40.102412   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:40.102434   11896 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:40.111856   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:40.111870   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:40.149555   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:40.244365   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:40.244393   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:40.286452   11896 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.286479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:40.301058   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.319790   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.319818   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:40.395452   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:40.395478   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:40.420594   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.465580   11896 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:40.465611   11896 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:40.517028   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.586224   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:40.586264   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:40.716640   11896 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:40.716667   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:40.864786   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:40.864809   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:40.974629   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:41.329483   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:41.329520   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:41.615715   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:41.615746   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:41.850585   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:41.850616   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:42.139510   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.139536   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:42.203522   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.646323739s)
	I0923 10:22:42.203571   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.203579   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.203637   11896 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.62266543s)
	I0923 10:22:42.203652   11896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.622706839s)
	I0923 10:22:42.203673   11896 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:42.203984   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204051   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204059   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.204072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.204292   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204308   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204357   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204648   11896 node_ready.go:35] waiting up to 6m0s for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265962   11896 node_ready.go:49] node "addons-230451" has status "Ready":"True"
	I0923 10:22:42.265985   11896 node_ready.go:38] duration metric: took 61.313529ms for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265995   11896 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:22:42.382117   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:42.433215   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.639353   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.028639151s)
	I0923 10:22:42.639403   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639414   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639437   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.018135683s)
	I0923 10:22:42.639481   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639496   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639513   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.99816104s)
	I0923 10:22:42.639574   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639591   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639699   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639710   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639718   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639731   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639808   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639885   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639923   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639930   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639937   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639944   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.640007   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640014   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.640168   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640182   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641237   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641246   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641258   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.641266   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.641730   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641744   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.815687   11896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-230451" context rescaled to 1 replicas
	I0923 10:22:42.853390   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.853416   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.853662   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.853720   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:44.448550   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:46.283789   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:46.283834   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.286793   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287202   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.287227   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287394   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.287553   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.287738   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.287873   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:46.555575   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:46.623519   11896 addons.go:234] Setting addon gcp-auth=true in "addons-230451"
	I0923 10:22:46.623584   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:46.624001   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.624048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.639512   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0923 10:22:46.639966   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.640495   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.640515   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.640853   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.641315   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.641348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.656710   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0923 10:22:46.657190   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.657684   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.657706   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.658044   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.658273   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:46.659892   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:46.660080   11896 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:46.660106   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.662909   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663305   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.663330   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663560   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.663699   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.663835   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.663965   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:47.013493   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:47.307143   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.606234939s)
	I0923 10:22:47.307203   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307215   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307214   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.5295194s)
	I0923 10:22:47.307233   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307245   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307246   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.653288375s)
	I0923 10:22:47.307261   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.157672592s)
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307316   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307318   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307367   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.006265482s)
	I0923 10:22:47.307413   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307416   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.886776853s)
	I0923 10:22:47.307425   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307441   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307452   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307512   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.790448754s)
	W0923 10:22:47.307537   11896 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:47.307568   11896 retry.go:31] will retry after 312.840585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:47.307652   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.332993076s)
	I0923 10:22:47.307672   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307694   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307874   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307912   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307930   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307936   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307954   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307957   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307963   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307966   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307973   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307977   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307984   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307941   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308023   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308030   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308075   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308102   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308105   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308114   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308121   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308128   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308132   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308135   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308138   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308142   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308145   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308165   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308177   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308185   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308191   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.309012   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309044   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309052   11896 addons.go:475] Verifying addon registry=true in "addons-230451"
	I0923 10:22:47.309241   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309250   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309257   11896 addons.go:475] Verifying addon metrics-server=true in "addons-230451"
	I0923 10:22:47.309419   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309453   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309460   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309479   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309499   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309736   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309772   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309779   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.310028   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.310059   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.310066   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311116   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.311130   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311151   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.311171   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.312036   11896 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-230451 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:47.312654   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.312668   11896 out.go:177] * Verifying registry addon...
	I0923 10:22:47.312738   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.312748   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.312802   11896 addons.go:475] Verifying addon ingress=true in "addons-230451"
	I0923 10:22:47.313891   11896 out.go:177] * Verifying ingress addon...
	I0923 10:22:47.314808   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:47.315984   11896 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:47.333135   11896 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:47.333156   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.333672   11896 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:47.333694   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
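	The kapi.go:75/kapi.go:96 entries above record minikube polling the cluster until every pod matching a label selector leaves Pending. A minimal sketch of such a wait loop with client-go follows; the function name waitForPodsRunning, the two-second poll interval, and the use of the default kubeconfig are illustrative assumptions, not minikube's actual implementation (requires a reasonably recent client-go/apimachinery for PollUntilContextTimeout).

	// Sketch only: poll until all pods matching a label selector are Running.
	// Names and intervals are illustrative; this is not minikube's kapi.go code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // not found yet (or transient error): keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending or otherwise not Running
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitForPodsRunning(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute)
		fmt.Println("registry pods ready:", err == nil)
	}

	Returning (false, nil) on list errors or on Pending pods keeps the loop polling instead of aborting, which mirrors how the log keeps reporting "current state: Pending" until the pods come up or the timeout expires.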
	I0923 10:22:47.362191   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.362210   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.362500   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.362519   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.620787   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:47.853958   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.854430   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.976575   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.543318151s)
	I0923 10:22:47.976615   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976627   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.976662   11896 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.31655795s)
	I0923 10:22:47.976916   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.976936   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.976944   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976951   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.977493   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.977493   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.977516   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.977530   11896 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:47.978353   11896 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:47.979244   11896 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:47.980816   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:47.981547   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:47.981951   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:47.981965   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:48.012863   11896 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:48.012883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.081072   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:48.081094   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:48.235021   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.235041   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:48.323476   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.325316   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.329262   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.487988   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.823283   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.823712   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.987157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.319059   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.320824   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.394285   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:49.486336   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.828379   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.845245   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.018644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.230146   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.609312903s)
	I0923 10:22:50.230207   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230224   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230234   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.904884388s)
	I0923 10:22:50.230272   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230290   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230489   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230525   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230539   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230546   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230590   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230616   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230653   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230664   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230671   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230801   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230830   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230834   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230842   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230852   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230861   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.232850   11896 addons.go:475] Verifying addon gcp-auth=true in "addons-230451"
	I0923 10:22:50.234749   11896 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:50.236715   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:50.240230   11896 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:50.240245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.341082   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.341419   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.485879   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.741139   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.819391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.822087   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.987076   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.240553   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.318867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.320884   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.740284   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.818704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.821561   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.888695   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:51.986219   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.241303   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.320629   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.321209   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.486705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.740428   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.819857   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.820725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.986468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.241277   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.318492   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.320484   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.520510   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.969717   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.974986   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.975544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.977863   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:53.986625   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.240759   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.320774   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.321373   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.486278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.740966   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.819228   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.822185   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.986658   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.240365   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.318431   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.320427   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.486106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.740761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.823261   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.825324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.989815   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.241561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.320639   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.320643   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.388229   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:56.487473   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.740723   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.819638   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.821374   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.986618   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.241599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.319347   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.320708   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.486908   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.740748   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.820700   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.820754   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.987523   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.239942   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.319913   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.320838   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.389727   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:58.488040   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.741176   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.818677   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.819952   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.986499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.240344   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.319170   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.321183   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.486469   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.740550   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.819952   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.823020   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.986806   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.240835   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.319990   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.321306   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.486611   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.820118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.821668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.889293   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:00.986752   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.240810   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.321217   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.321511   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.486551   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.741019   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.819706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.820249   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.986133   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.240968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.319524   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.322199   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.493692   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.740885   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.819358   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.821237   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.224620   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.337753   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.338071   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.338115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.387890   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:03.485468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.739963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.820105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.820454   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.986601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.240576   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.321031   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.321397   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.485628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.007814   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.008134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.008442   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.011226   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.260975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.320236   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.321513   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.389023   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:05.487041   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.740227   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.818341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.819725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.986304   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.240486   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.318856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.321629   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.486680   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.740290   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.820149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.820293   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.986074   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.240910   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.319345   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.320504   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.485787   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.740373   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.820179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.821686   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.888632   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:07.986582   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.239642   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.319453   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.321440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.486021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.741278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.818653   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.820061   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.987104   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.242250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.319190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.320606   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.487395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.740299   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.818478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.820810   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.985704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.240100   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.318707   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.320481   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.391013   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:10.486242   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.740836   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.819488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.820601   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.986709   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.241401   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.318575   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.320781   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.486517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.740599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.819000   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.820650   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.985664   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.241013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.320039   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.320366   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.486654   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.819149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.821095   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.887785   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:12.986107   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.241268   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.318846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.320609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.486601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.740348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.820668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.986922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.240485   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.320070   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.320544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.910906   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.923120   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:15.012269   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.012603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.012605   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.013481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.241391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.342450   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.342933   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.487968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.741013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.819807   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.820519   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.986818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.240849   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.318613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.319887   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.486621   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.741530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.818963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.820103   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.986250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.241331   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.318639   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.319759   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.388335   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:17.486169   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.740440   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.818651   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.820082   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.986722   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.240851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.319266   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.321957   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.486827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.749479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.818898   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.819965   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.986655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.353395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.353455   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.353980   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.388491   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:19.486286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.740811   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.821465   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.987794   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.241615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.343341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.345086   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.485876   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.741706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.822445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.822885   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.986251   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.241243   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.342973   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.343648   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.388636   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.486389   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.741586   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.820057   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.820872   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.986245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.240821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.321008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.321506   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.746761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.845229   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.845516   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.889257   11896 pod_ready.go:93] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.889286   11896 pod_ready.go:82] duration metric: took 40.507126685s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.889299   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.891229   11896 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891254   11896 pod_ready.go:82] duration metric: took 1.946573ms for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	E0923 10:23:22.891266   11896 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891274   11896 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899549   11896 pod_ready.go:93] pod "etcd-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.899575   11896 pod_ready.go:82] duration metric: took 8.292332ms for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899586   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906049   11896 pod_ready.go:93] pod "kube-apiserver-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.906074   11896 pod_ready.go:82] duration metric: took 6.480206ms for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906086   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910833   11896 pod_ready.go:93] pod "kube-controller-manager-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.910859   11896 pod_ready.go:82] duration metric: took 4.764833ms for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910872   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.986668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.089873   11896 pod_ready.go:93] pod "kube-proxy-2f5tn" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.089900   11896 pod_ready.go:82] duration metric: took 179.019892ms for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.089912   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.241038   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.320388   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.322190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.486569   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.487599   11896 pod_ready.go:93] pod "kube-scheduler-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.487631   11896 pod_ready.go:82] duration metric: took 397.7086ms for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.487644   11896 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.740324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.818859   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.819999   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.886465   11896 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.886497   11896 pod_ready.go:82] duration metric: took 398.839138ms for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.886507   11896 pod_ready.go:39] duration metric: took 41.620501569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:23.886523   11896 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:23:23.886570   11896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:23:23.914996   11896 api_server.go:72] duration metric: took 44.780704115s to wait for apiserver process to appear ...
	I0923 10:23:23.915024   11896 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:23:23.915046   11896 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0923 10:23:23.920072   11896 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0923 10:23:23.921132   11896 api_server.go:141] control plane version: v1.31.1
	I0923 10:23:23.921159   11896 api_server.go:131] duration metric: took 6.126816ms to wait for apiserver health ...
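	The api_server.go lines above first wait for the kube-apiserver process (via pgrep over SSH) and then probe https://192.168.39.142:8443/healthz until it answers 200 "ok". A hedged sketch of such a health probe using client-go's REST client follows; the kubeconfig-based setup is an assumption for illustration, not the code minikube runs.

	// Sketch only: probe the apiserver /healthz endpoint using kubeconfig credentials.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// GET https://<apiserver>/healthz; a healthy control plane answers "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body) // expected: ok
	}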
	I0923 10:23:23.921169   11896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:23:24.437367   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.437846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.438079   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.438323   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.442864   11896 system_pods.go:59] 17 kube-system pods found
	I0923 10:23:24.442893   11896 system_pods.go:61] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.442904   11896 system_pods.go:61] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.442914   11896 system_pods.go:61] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.442930   11896 system_pods.go:61] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.442939   11896 system_pods.go:61] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.442949   11896 system_pods.go:61] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.442954   11896 system_pods.go:61] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.442963   11896 system_pods.go:61] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.442968   11896 system_pods.go:61] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.442976   11896 system_pods.go:61] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.442985   11896 system_pods.go:61] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.442993   11896 system_pods.go:61] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.443002   11896 system_pods.go:61] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.443009   11896 system_pods.go:61] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.443020   11896 system_pods.go:61] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443030   11896 system_pods.go:61] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443039   11896 system_pods.go:61] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.443049   11896 system_pods.go:74] duration metric: took 521.872993ms to wait for pod list to return data ...
	I0923 10:23:24.443060   11896 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:23:24.445709   11896 default_sa.go:45] found service account: "default"
	I0923 10:23:24.445725   11896 default_sa.go:55] duration metric: took 2.659813ms for default service account to be created ...
	I0923 10:23:24.445731   11896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:23:24.486762   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.493551   11896 system_pods.go:86] 17 kube-system pods found
	I0923 10:23:24.493583   11896 system_pods.go:89] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.493595   11896 system_pods.go:89] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.493604   11896 system_pods.go:89] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.493618   11896 system_pods.go:89] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.493625   11896 system_pods.go:89] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.493633   11896 system_pods.go:89] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.493642   11896 system_pods.go:89] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.493650   11896 system_pods.go:89] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.493658   11896 system_pods.go:89] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.493666   11896 system_pods.go:89] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.493677   11896 system_pods.go:89] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.493685   11896 system_pods.go:89] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.493693   11896 system_pods.go:89] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.493704   11896 system_pods.go:89] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.493716   11896 system_pods.go:89] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493727   11896 system_pods.go:89] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493735   11896 system_pods.go:89] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.493746   11896 system_pods.go:126] duration metric: took 48.009337ms to wait for k8s-apps to be running ...
	I0923 10:23:24.493758   11896 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:23:24.493809   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:23:24.513529   11896 system_svc.go:56] duration metric: took 19.75998ms WaitForService to wait for kubelet
	I0923 10:23:24.513564   11896 kubeadm.go:582] duration metric: took 45.379276732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:23:24.513588   11896 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:23:24.686932   11896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:23:24.686965   11896 node_conditions.go:123] node cpu capacity is 2
	I0923 10:23:24.686977   11896 node_conditions.go:105] duration metric: took 173.384337ms to run NodePressure ...
	I0923 10:23:24.686989   11896 start.go:241] waiting for startup goroutines ...
	I0923 10:23:24.740644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.819562   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.820700   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.987200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.241300   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.343424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.343684   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.488088   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.740686   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.823744   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.824711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.986603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.245648   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.319158   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.320408   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.486134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.741656   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.818867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.820585   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.986548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.240557   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.319023   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.320864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.486855   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.740443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.820340   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.985688   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.240798   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.319348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.320307   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.485922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.740883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.819269   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.821099   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.986140   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.241577   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.319821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.320555   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.485837   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.739828   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.819216   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.820683   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.986090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.240500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.318390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.320276   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.485561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.740036   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.819427   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.820954   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.986481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.242825   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.319201   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.321609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.486421   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.740721   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.820745   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.821165   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.987716   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.240042   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.320623   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.320636   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.487536   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.740655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.819092   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.820745   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.240919   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.319548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.321128   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.486183   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.740178   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.818613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.830934   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.234483   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.240705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.318188   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.321549   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.486252   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.741090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.818534   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.820864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.986959   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.241200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.318668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.320010   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.487738   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.740755   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.846303   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.847461   11896 kapi.go:107] duration metric: took 48.532653767s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:23:35.986432   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.240073   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.320490   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.486975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.740607   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.821390   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.985931   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.240868   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.320823   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.486628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.740321   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.819943   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.320406   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.485374   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.821158   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.985749   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.241435   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.320711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.487179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.740799   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.820591   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.987098   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.239842   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.321547   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.485975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.740732   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.821115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.985768   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.240307   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.320076   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.486615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.739979   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.820446   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.985972   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.240670   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.320827   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.486416   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.821019   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.986853   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.240848   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.320450   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.487018   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.740754   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.841792   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.986488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.240295   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.320589   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.485911   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.741445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.820755   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.987203   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.243595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.320568   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.490033   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.740061   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.821180   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.988792   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.240043   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.320715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.487369   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.740245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.819995   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.986874   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.243429   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.345068   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.489391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.740015   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.820624   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.992212   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.241134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.323440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.486090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.740606   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.820802   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.991332   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.240530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.417715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.487512   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.742506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.820524   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.239803   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.320349   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.486994   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.741224   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.821593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.986425   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.240567   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.320321   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.486405   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.740877   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.986484   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.240827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.320722   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.487461   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.740499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.841584   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.241311   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.324855   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.487424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.740118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.824677   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.985851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.240751   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.320803   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.487062   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.740218   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.831563   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.987830   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.240818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.332865   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.501106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.740363   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.822929   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.990443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.241141   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.806895   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.807674   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.808159   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.820644   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.986084   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.241298   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.327433   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.487016   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.740517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.820018   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.986945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.321016   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.487366   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.740865   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.820699   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.985850   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.479008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.479029   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.489051   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.741335   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.842531   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.986871   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.240003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.320593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.487659   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.739808   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.824778   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.986705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.241008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.320728   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.486320   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.742003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.820606   11896 kapi.go:107] duration metric: took 1m14.504617876s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:01.986382   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.240173   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.510479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.759085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.989516   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.240478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.486506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.739595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.987737   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.240394   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.485945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.740361   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.987426   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.241017   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.486902   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.740789   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.986398   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.240422   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.488497   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.740174   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.986390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.239997   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.486563   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.740856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.985705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.239980   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.487157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.740726   11896 kapi.go:107] duration metric: took 1m18.504006563s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:08.742218   11896 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-230451 cluster.
	I0923 10:24:08.743548   11896 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:08.744742   11896 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:08.986003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.487085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.986761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.486537   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.996063   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.487998   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.986105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.489482   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.986286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.531021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.985832   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.486937   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.988956   11896 kapi.go:107] duration metric: took 1m27.0074062s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:14.990655   11896 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 10:24:14.991930   11896 addons.go:510] duration metric: took 1m35.857607898s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass metrics-server inspektor-gadget storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 10:24:14.991968   11896 start.go:246] waiting for cluster config update ...
	I0923 10:24:14.991993   11896 start.go:255] writing updated cluster config ...
	I0923 10:24:14.992266   11896 ssh_runner.go:195] Run: rm -f paused
	I0923 10:24:15.042846   11896 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:24:15.044785   11896 out.go:177] * Done! kubectl is now configured to use "addons-230451" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.435256694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087766435228122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b30ad63-02f8-4925-9eea-b560334c51e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.435806174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a565bc5f-a4cc-442d-aec8-92f2f1dd1fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.435879413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a565bc5f-a4cc-442d-aec8-92f2f1dd1fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.436213090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a565bc5f-a4cc-442d-aec8-92f2f1dd1fb1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.478494658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82448927-1710-46e5-b058-11913cd46fff name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.478588756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82448927-1710-46e5-b058-11913cd46fff name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.479919269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7139405a-3f8b-4407-acef-09abf21e6f61 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.481078292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087766481049387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7139405a-3f8b-4407-acef-09abf21e6f61 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.481787526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9bf4ab0-b9c6-4eb8-8e36-c526b8425cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.481839510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9bf4ab0-b9c6-4eb8-8e36-c526b8425cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.484706970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9bf4ab0-b9c6-4eb8-8e36-c526b8425cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.523416365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c6f12c-40bc-4316-b1b7-4959229c1845 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.523494817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c6f12c-40bc-4316-b1b7-4959229c1845 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.524579623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bac3aca-0007-4be3-ad6d-05aca06170bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.525689447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087766525661264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bac3aca-0007-4be3-ad6d-05aca06170bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.526259450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=520a897d-4a18-438c-8afd-aad029c9c16e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.526403217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=520a897d-4a18-438c-8afd-aad029c9c16e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.526745549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=520a897d-4a18-438c-8afd-aad029c9c16e name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.565575290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3f32d5e-412d-42b8-8b3c-06df77dd673a name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.565662043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3f32d5e-412d-42b8-8b3c-06df77dd673a name=/runtime.v1.RuntimeService/Version
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.566712889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3802ebe-2d5b-47d5-b059-5597ad415c1f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.567820625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087766567796711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3802ebe-2d5b-47d5-b059-5597ad415c1f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.568443732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e3dca83-e686-4665-8fc2-d623618d68bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.568504656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e3dca83-e686-4665-8fc2-d623618d68bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:36:06 addons-230451 crio[662]: time="2024-09-23 10:36:06.568777969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9,PodSandboxId:82463f63435a78fe1403a783d6b2f2cf5669383376cc93f97a43df432d6089ce,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087042730349812,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-278z9,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a3bdc91-4b2f-4273-a400-dfdbdebdceec,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149,PodSandboxId:25adc288fa90499568a623cf8611ccbd69084fb34aa053fb1de9be25c9983a1c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727087027257915121,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-b7shb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ecb9137f-5ed1-4769-9925-b2c4998f0058,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1727086963270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e3dca83-e686-4665-8fc2-d623618d68bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0fabf94d10ff5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   8c51891f1ece5       hello-world-app-55bf9c44b4-trsjs
	5c7f36927a761       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   d5acbfd4821f0       nginx
	63f8091f52d77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   7accadc369381       gcp-auth-89d5ffd79-r2dxj
	e06f961e39af1       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             12 minutes ago      Exited              patch                     2                   82463f63435a7       ingress-nginx-admission-patch-278z9
	1b37183ea0c55       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   25adc288fa904       ingress-nginx-admission-create-b7shb
	992df9568fa60       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   9b9a78bf3e3fb       metrics-server-84c5f94fbc-vx2z2
	48b883a7cf210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   8f190e8711730       storage-provisioner
	6fed682ab380f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   248e92b5f5680       coredns-7c65d6cfc9-7mfbw
	6238ede2ce75e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   11212750411bf       kube-proxy-2f5tn
	9b030424709a2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   45cd3db2a1e7a       kube-scheduler-addons-230451
	e428589b0fa5f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   5a2773265dbdc       kube-controller-manager-addons-230451
	455a0db0cbf9d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   48d959ccb4da3       etcd-addons-230451
	853b9960a36de       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   35551829a0c35       kube-apiserver-addons-230451
	
	
	==> coredns [6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131] <==
	[INFO] 127.0.0.1:53719 - 30820 "HINFO IN 6685210372362929190.536412389867895458. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01361851s
	[INFO] 10.244.0.8:57781 - 24672 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0003346s
	[INFO] 10.244.0.8:57781 - 61805 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149843s
	[INFO] 10.244.0.8:51455 - 24269 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117247s
	[INFO] 10.244.0.8:51455 - 30147 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132017s
	[INFO] 10.244.0.8:49756 - 27783 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008366s
	[INFO] 10.244.0.8:49756 - 27013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096337s
	[INFO] 10.244.0.8:57401 - 50559 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099583s
	[INFO] 10.244.0.8:57401 - 121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163833s
	[INFO] 10.244.0.8:41582 - 43809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171459s
	[INFO] 10.244.0.8:41582 - 3879 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206793s
	[INFO] 10.244.0.8:34747 - 26460 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006276s
	[INFO] 10.244.0.8:34747 - 25950 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029536s
	[INFO] 10.244.0.8:42596 - 15504 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050529s
	[INFO] 10.244.0.8:42596 - 29358 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049956s
	[INFO] 10.244.0.8:46828 - 21289 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081739s
	[INFO] 10.244.0.8:46828 - 11311 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096602s
	[INFO] 10.244.0.21:47112 - 35978 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00044167s
	[INFO] 10.244.0.21:39898 - 22255 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008491s
	[INFO] 10.244.0.21:43466 - 53222 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131557s
	[INFO] 10.244.0.21:52335 - 61823 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159688s
	[INFO] 10.244.0.21:42381 - 33204 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118433s
	[INFO] 10.244.0.21:51980 - 28250 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104154s
	[INFO] 10.244.0.21:37226 - 50868 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00097457s
	[INFO] 10.244.0.21:35684 - 29625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000645401s
	
	
	==> describe nodes <==
	Name:               addons-230451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-230451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-230451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-230451
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-230451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:36:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:34:07 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:34:07 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:34:07 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:34:07 +0000   Mon, 23 Sep 2024 10:22:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-230451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 610d00e132ff4d0bb3d2f3caf1b3d48a
	  System UUID:                610d00e1-32ff-4d0b-b3d2-f3caf1b3d48a
	  Boot ID:                    ccc8674b-e396-46a3-bf38-22f6c0d79432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-trsjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-89d5ffd79-r2dxj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-7mfbw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-230451                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-230451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-230451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2f5tn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-230451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-vx2z2          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-230451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-230451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-230451 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-230451 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-230451 event: Registered Node addons-230451 in Controller
	
	
	==> dmesg <==
	[Sep23 10:23] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.997386] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.219809] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.523154] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.175400] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.134104] kauditd_printk_skb: 71 callbacks suppressed
	[Sep23 10:24] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.640337] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.746008] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.771381] kauditd_printk_skb: 45 callbacks suppressed
	[Sep23 10:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.410642] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.215645] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.744354] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.359012] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.799993] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.083276] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.110104] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.862454] kauditd_printk_skb: 37 callbacks suppressed
	[Sep23 10:35] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 10:36] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb] <==
	{"level":"info","ts":"2024-09-23T10:23:56.789725Z","caller":"traceutil/trace.go:171","msg":"trace[2052105943] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1046; }","duration":"386.955803ms","start":"2024-09-23T10:23:56.402762Z","end":"2024-09-23T10:23:56.789718Z","steps":["trace[2052105943] 'range keys from in-memory index tree'  (duration: 386.853751ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.402719Z","time spent":"387.021008ms","remote":"127.0.0.1:56784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-23T10:23:56.789891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.104712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:56.789926Z","caller":"traceutil/trace.go:171","msg":"trace[1887252976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1046; }","duration":"316.139111ms","start":"2024-09-23T10:23:56.473782Z","end":"2024-09-23T10:23:56.789921Z","steps":["trace[1887252976] 'range keys from in-memory index tree'  (duration: 316.059373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789943Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.473634Z","time spent":"316.304062ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-23T10:23:56.790488Z","caller":"traceutil/trace.go:171","msg":"trace[1993101087] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"300.658273ms","start":"2024-09-23T10:23:56.489821Z","end":"2024-09-23T10:23:56.790480Z","steps":["trace[1993101087] 'process raft request'  (duration: 297.906276ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.790623Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.489805Z","time spent":"300.723172ms","remote":"127.0.0.1:57094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:790 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2024-09-23T10:23:59.461550Z","caller":"traceutil/trace.go:171","msg":"trace[1713246877] linearizableReadLoop","detail":"{readStateIndex:1094; appliedIndex:1093; }","duration":"232.90659ms","start":"2024-09-23T10:23:59.228626Z","end":"2024-09-23T10:23:59.461533Z","steps":["trace[1713246877] 'read index received'  (duration: 231.853253ms)","trace[1713246877] 'applied index is now lower than readState.Index'  (duration: 1.052836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:23:59.461773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.14172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.461821Z","caller":"traceutil/trace.go:171","msg":"trace[1810414376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"233.215712ms","start":"2024-09-23T10:23:59.228599Z","end":"2024-09-23T10:23:59.461815Z","steps":["trace[1810414376] 'agreement among raft nodes before linearized reading'  (duration: 233.094125ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:23:59.461702Z","caller":"traceutil/trace.go:171","msg":"trace[1566092567] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"351.447386ms","start":"2024-09-23T10:23:59.110237Z","end":"2024-09-23T10:23:59.461684Z","steps":["trace[1566092567] 'process raft request'  (duration: 350.997358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.462122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.656543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.462168Z","caller":"traceutil/trace.go:171","msg":"trace[1196861560] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"154.708489ms","start":"2024-09-23T10:23:59.307453Z","end":"2024-09-23T10:23:59.462162Z","steps":["trace[1196861560] 'agreement among raft nodes before linearized reading'  (duration: 154.640705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.463122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:59.110202Z","time spent":"351.753223ms","remote":"127.0.0.1:56906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" mod_revision:1051 > success:<request_put:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" value_size:628 lease:839800514810162161 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" > >"}
	{"level":"info","ts":"2024-09-23T10:24:21.903648Z","caller":"traceutil/trace.go:171","msg":"trace[1089261884] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"329.698815ms","start":"2024-09-23T10:24:21.573933Z","end":"2024-09-23T10:24:21.903631Z","steps":["trace[1089261884] 'process raft request'  (duration: 329.594188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:24:21.903769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:24:21.573911Z","time spent":"329.789617ms","remote":"127.0.0.1:56998","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1190 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-23T10:32:22.866527Z","caller":"traceutil/trace.go:171","msg":"trace[1341451039] transaction","detail":"{read_only:false; response_revision:1943; number_of_response:1; }","duration":"135.103828ms","start":"2024-09-23T10:32:22.731398Z","end":"2024-09-23T10:32:22.866501Z","steps":["trace[1341451039] 'process raft request'  (duration: 134.961155ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:29.856569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-09-23T10:32:29.884999Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1510,"took":"27.805671ms","hash":3200741289,"current-db-size-bytes":6541312,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3637248,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:32:29.885056Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3200741289,"revision":1510,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:32:55.602366Z","caller":"traceutil/trace.go:171","msg":"trace[225191809] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"126.227212ms","start":"2024-09-23T10:32:55.476064Z","end":"2024-09-23T10:32:55.602291Z","steps":["trace[225191809] 'read index received'  (duration: 126.065779ms)","trace[225191809] 'applied index is now lower than readState.Index'  (duration: 161.03µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:32:55.602562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.447391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:32:55.602588Z","caller":"traceutil/trace.go:171","msg":"trace[894733726] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2160; }","duration":"126.522421ms","start":"2024-09-23T10:32:55.476060Z","end":"2024-09-23T10:32:55.602582Z","steps":["trace[894733726] 'agreement among raft nodes before linearized reading'  (duration: 126.428208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:55.602743Z","caller":"traceutil/trace.go:171","msg":"trace[43643442] transaction","detail":"{read_only:false; response_revision:2160; number_of_response:1; }","duration":"129.84545ms","start":"2024-09-23T10:32:55.472891Z","end":"2024-09-23T10:32:55.602737Z","steps":["trace[43643442] 'process raft request'  (duration: 129.312421ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:33:00.762090Z","caller":"traceutil/trace.go:171","msg":"trace[648338775] transaction","detail":"{read_only:false; response_revision:2169; number_of_response:1; }","duration":"288.031158ms","start":"2024-09-23T10:33:00.473384Z","end":"2024-09-23T10:33:00.761415Z","steps":["trace[648338775] 'process raft request'  (duration: 287.71469ms)"],"step_count":1}
	
	
	==> gcp-auth [63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b] <==
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:24:15 Ready to marshal response ...
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:29 Ready to marshal response ...
	2024/09/23 10:32:29 Ready to write response ...
	2024/09/23 10:32:37 Ready to marshal response ...
	2024/09/23 10:32:37 Ready to write response ...
	2024/09/23 10:32:53 Ready to marshal response ...
	2024/09/23 10:32:53 Ready to write response ...
	2024/09/23 10:33:28 Ready to marshal response ...
	2024/09/23 10:33:28 Ready to write response ...
	2024/09/23 10:33:32 Ready to marshal response ...
	2024/09/23 10:33:32 Ready to write response ...
	2024/09/23 10:35:55 Ready to marshal response ...
	2024/09/23 10:35:55 Ready to write response ...
	
	
	==> kernel <==
	 10:36:06 up 14 min,  0 users,  load average: 0.41, 0.64, 0.53
	Linux addons-230451 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e] <==
	E0923 10:24:23.987293       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	E0923 10:24:23.993204       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	I0923 10:24:24.062155       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:32:18.858064       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.199.8"}
	E0923 10:32:53.563750       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:33:08.344205       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:33:27.592624       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:28.618746       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:32.583253       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:32.794255       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.115.172"}
	I0923 10:33:45.762961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.763069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.780962       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.781083       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.808036       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.808930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.811049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.811683       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.937816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.937953       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:33:46.808898       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:33:46.938287       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:33:46.947106       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0923 10:35:56.152469       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.183.72"}
	E0923 10:35:58.637129       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780] <==
	W0923 10:34:36.874285       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:36.874523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:01.161278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:01.161567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:01.326597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:01.326693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:15.201234       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:15.201295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:17.013393       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:17.013437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:37.233563       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:37.233714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:52.496098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:52.496149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:35:55.967621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.747013ms"
	I0923 10:35:55.978136       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.423885ms"
	I0923 10:35:55.978393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="159.48µs"
	I0923 10:35:55.993894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.103µs"
	W0923 10:35:56.925547       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:56.925605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:35:58.554165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.503µs"
	I0923 10:35:58.556798       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 10:35:58.565977       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 10:35:59.977494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.791789ms"
	I0923 10:35:59.978634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="97.99µs"
	
	
	==> kube-proxy [6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:22:43.920909       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:22:44.021992       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	E0923 10:22:44.022096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:45.319016       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:22:45.319081       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:22:45.319124       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:45.327775       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:45.328048       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:45.328078       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:45.345796       1 config.go:199] "Starting service config controller"
	I0923 10:22:45.345835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:45.345866       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:45.345870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:45.350777       1 config.go:328] "Starting node config controller"
	I0923 10:22:45.350807       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:45.446542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:22:45.446598       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:45.450897       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe] <==
	W0923 10:22:31.294807       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:31.294862       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:22:32.090971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:32.091289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.095004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:32.095037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.148723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.148834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.209219       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:32.209362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.290354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.290448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.370809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.370910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.393003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.393122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.446838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:22:32.446961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.464976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:32.465158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.550414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:22:32.550554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.715850       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:32.715995       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 10:22:34.754020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:35:56 addons-230451 kubelet[1205]: I0923 10:35:56.020808    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/144a678c-016e-44a9-82ac-25f14e9771c8-gcp-creds\") pod \"hello-world-app-55bf9c44b4-trsjs\" (UID: \"144a678c-016e-44a9-82ac-25f14e9771c8\") " pod="default/hello-world-app-55bf9c44b4-trsjs"
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.230297    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twfjb\" (UniqueName: \"kubernetes.io/projected/c962d61b-b651-40b4-b128-49b4f1966a46-kube-api-access-twfjb\") pod \"c962d61b-b651-40b4-b128-49b4f1966a46\" (UID: \"c962d61b-b651-40b4-b128-49b4f1966a46\") "
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.236555    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c962d61b-b651-40b4-b128-49b4f1966a46-kube-api-access-twfjb" (OuterVolumeSpecName: "kube-api-access-twfjb") pod "c962d61b-b651-40b4-b128-49b4f1966a46" (UID: "c962d61b-b651-40b4-b128-49b4f1966a46"). InnerVolumeSpecName "kube-api-access-twfjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.331291    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-twfjb\" (UniqueName: \"kubernetes.io/projected/c962d61b-b651-40b4-b128-49b4f1966a46-kube-api-access-twfjb\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.937610    1205 scope.go:117] "RemoveContainer" containerID="da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf"
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.964219    1205 scope.go:117] "RemoveContainer" containerID="da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf"
	Sep 23 10:35:57 addons-230451 kubelet[1205]: E0923 10:35:57.964830    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf\": container with ID starting with da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf not found: ID does not exist" containerID="da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf"
	Sep 23 10:35:57 addons-230451 kubelet[1205]: I0923 10:35:57.964879    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf"} err="failed to get container status \"da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf\": rpc error: code = NotFound desc = could not find container \"da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf\": container with ID starting with da7f78da3232567cfbee26dfa7812e1a19702d5d6e98fb4d5b6b3faf4780a2cf not found: ID does not exist"
	Sep 23 10:35:58 addons-230451 kubelet[1205]: E0923 10:35:58.709840    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:35:59 addons-230451 kubelet[1205]: I0923 10:35:59.712068    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3bdc91-4b2f-4273-a400-dfdbdebdceec" path="/var/lib/kubelet/pods/8a3bdc91-4b2f-4273-a400-dfdbdebdceec/volumes"
	Sep 23 10:35:59 addons-230451 kubelet[1205]: I0923 10:35:59.712607    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c962d61b-b651-40b4-b128-49b4f1966a46" path="/var/lib/kubelet/pods/c962d61b-b651-40b4-b128-49b4f1966a46/volumes"
	Sep 23 10:35:59 addons-230451 kubelet[1205]: I0923 10:35:59.712954    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb9137f-5ed1-4769-9925-b2c4998f0058" path="/var/lib/kubelet/pods/ecb9137f-5ed1-4769-9925-b2c4998f0058/volumes"
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.866568    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70182994-4ec2-4cc8-a4b3-754d8223e9c5-webhook-cert\") pod \"70182994-4ec2-4cc8-a4b3-754d8223e9c5\" (UID: \"70182994-4ec2-4cc8-a4b3-754d8223e9c5\") "
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.866632    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72xj2\" (UniqueName: \"kubernetes.io/projected/70182994-4ec2-4cc8-a4b3-754d8223e9c5-kube-api-access-72xj2\") pod \"70182994-4ec2-4cc8-a4b3-754d8223e9c5\" (UID: \"70182994-4ec2-4cc8-a4b3-754d8223e9c5\") "
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.868699    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70182994-4ec2-4cc8-a4b3-754d8223e9c5-kube-api-access-72xj2" (OuterVolumeSpecName: "kube-api-access-72xj2") pod "70182994-4ec2-4cc8-a4b3-754d8223e9c5" (UID: "70182994-4ec2-4cc8-a4b3-754d8223e9c5"). InnerVolumeSpecName "kube-api-access-72xj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.869621    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70182994-4ec2-4cc8-a4b3-754d8223e9c5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "70182994-4ec2-4cc8-a4b3-754d8223e9c5" (UID: "70182994-4ec2-4cc8-a4b3-754d8223e9c5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.963805    1205 scope.go:117] "RemoveContainer" containerID="c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442"
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.966905    1205 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70182994-4ec2-4cc8-a4b3-754d8223e9c5-webhook-cert\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.966920    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-72xj2\" (UniqueName: \"kubernetes.io/projected/70182994-4ec2-4cc8-a4b3-754d8223e9c5-kube-api-access-72xj2\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.984715    1205 scope.go:117] "RemoveContainer" containerID="c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442"
	Sep 23 10:36:01 addons-230451 kubelet[1205]: E0923 10:36:01.985214    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442\": container with ID starting with c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442 not found: ID does not exist" containerID="c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442"
	Sep 23 10:36:01 addons-230451 kubelet[1205]: I0923 10:36:01.985283    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442"} err="failed to get container status \"c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442\": rpc error: code = NotFound desc = could not find container \"c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442\": container with ID starting with c1e529969cb938e4ca7d4ab9e2288fd032bf55488375c186f4a899c9c3dfa442 not found: ID does not exist"
	Sep 23 10:36:03 addons-230451 kubelet[1205]: I0923 10:36:03.716090    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70182994-4ec2-4cc8-a4b3-754d8223e9c5" path="/var/lib/kubelet/pods/70182994-4ec2-4cc8-a4b3-754d8223e9c5/volumes"
	Sep 23 10:36:04 addons-230451 kubelet[1205]: E0923 10:36:04.057269    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087764056849201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:04 addons-230451 kubelet[1205]: E0923 10:36:04.057444    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087764056849201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024] <==
	I0923 10:22:46.156565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:22:46.196845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:22:46.202503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:22:46.219408       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:22:46.219529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	I0923 10:22:46.220596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfe369ce-2e58-4a81-9323-18883c63569e", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9 became leader
	I0923 10:22:46.321402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-230451 -n addons-230451
helpers_test.go:261: (dbg) Run:  kubectl --context addons-230451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-230451 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-230451 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-230451/192.168.39.142
	Start Time:       Mon, 23 Sep 2024 10:24:15 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ctzjs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ctzjs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-230451
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    99s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.39s)
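
Both post-mortems in this report and the MetricsServer run below rely on the same helper pattern: poll the cluster until every pod matching a label selector reports Running, or give up after a fixed budget (the "waiting 6m0s for pods matching ..." lines). A minimal sketch of one way to write such a wait with client-go; the kubeconfig handling, namespace, selector and use of wait.PollImmediate are illustrative assumptions, not the actual helpers_test.go implementation.

// waitpods.go — a sketch of "wait for pods matching <selector> to be Running".
// The kubeconfig path, namespace, selector and timeout below are placeholders;
// this is not the helpers_test.go code, just the same idea.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunningPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // not ready yet (or transient API error): keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunningPods(c, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
		fmt.Println("pods never became Running:", err)
	}
}

In this sketch, returning (false, nil) from the condition means "not ready yet, keep polling", while a non-nil error would abort the wait immediately.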

                                                
                                    
TestAddons/parallel/MetricsServer (300.49s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.158146ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004319637s
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (108.995084ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 9m57.210058675s

                                                
                                                
** /stderr **
I0923 10:32:36.211826   11139 retry.go:31] will retry after 2.37211704s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (66.695127ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 9m59.649269772s

                                                
                                                
** /stderr **
I0923 10:32:38.651471   11139 retry.go:31] will retry after 6.560988964s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (68.28997ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 10m6.27965029s

                                                
                                                
** /stderr **
I0923 10:32:45.281131   11139 retry.go:31] will retry after 8.458615435s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (63.55273ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 10m14.80279287s

                                                
                                                
** /stderr **
I0923 10:32:53.804416   11139 retry.go:31] will retry after 5.695864625s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (64.731942ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 10m20.563888571s

                                                
                                                
** /stderr **
I0923 10:32:59.565410   11139 retry.go:31] will retry after 8.429737093s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (66.075662ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 10m29.060313206s

                                                
                                                
** /stderr **
I0923 10:33:08.061716   11139 retry.go:31] will retry after 29.761382792s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (65.224005ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 10m58.88733653s

                                                
                                                
** /stderr **
I0923 10:33:37.888931   11139 retry.go:31] will retry after 39.126036909s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (61.815463ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 11m38.076633082s

                                                
                                                
** /stderr **
I0923 10:34:17.078265   11139 retry.go:31] will retry after 1m3.646448184s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (61.553032ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 12m41.789875689s

                                                
                                                
** /stderr **
I0923 10:35:20.791460   11139 retry.go:31] will retry after 47.848185492s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (61.874276ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 13m29.700129453s

                                                
                                                
** /stderr **
I0923 10:36:08.701785   11139 retry.go:31] will retry after 1m19.095801746s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-230451 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-230451 top pods -n kube-system: exit status 1 (60.857895ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7mfbw, age: 14m48.858990424s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
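
The sequence above is the harness's retry helper in action: each failed `kubectl top pods` is followed by a "will retry after ..." line with a progressively longer, jittered delay, until the overall budget is exhausted and the check fails. A rough sketch of that retry-with-backoff shape in Go follows; the doubling factor, jitter and 30-second budget are illustrative guesses, not minikube's actual retry.go behavior.

// retrysketch.go — retry a command-like function with growing, jittered backoff
// until it succeeds or an overall deadline passes (illustrative only; not the
// real minikube retry.go implementation).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(budget time.Duration, fn func() error) error {
	deadline := time.Now().Add(budget)
	delay := 2 * time.Second
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Grow the delay and add jitter, roughly matching the
		// "will retry after ..." progression in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(30*time.Second, func() error {
		return errors.New("metrics not available yet") // stand-in for `kubectl top pods` failing
	})
	fmt.Println(err)
}
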
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-230451 -n addons-230451
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 logs -n 25: (1.31947575s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-944972                                                                     | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-056027                                                                     | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-004546                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34819                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-004546                                                                     | binary-mirror-004546 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-230451 --wait=true                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | -p addons-230451                                                                            |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-230451 ssh cat                                                                       | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | /opt/local-path-provisioner/pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| ip      | addons-230451 ip                                                                            | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-230451 addons                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-230451                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-230451 ssh curl -s                                                                   | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-230451 addons                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-230451 ip                                                                            | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-230451 addons disable                                                                | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-230451 addons                                                                        | addons-230451        | jenkins | v1.34.0 | 23 Sep 24 10:37 UTC | 23 Sep 24 10:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:54.509930   11896 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:54.510176   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510185   11896 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:54.510189   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:54.510371   11896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:21:54.510927   11896 out.go:352] Setting JSON to false
	I0923 10:21:54.511749   11896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":257,"bootTime":1727086657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:54.511839   11896 start.go:139] virtualization: kvm guest
	I0923 10:21:54.513820   11896 out.go:177] * [addons-230451] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:54.515097   11896 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:54.515105   11896 notify.go:220] Checking for updates...
	I0923 10:21:54.517574   11896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:54.518845   11896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:21:54.519947   11896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:54.520978   11896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:54.521954   11896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:54.523196   11896 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:54.554453   11896 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:21:54.555559   11896 start.go:297] selected driver: kvm2
	I0923 10:21:54.555580   11896 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:21:54.555601   11896 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:54.556616   11896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.556711   11896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:21:54.571291   11896 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:21:54.571371   11896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:54.571718   11896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:54.571756   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:21:54.571824   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:21:54.571833   11896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:54.571901   11896 start.go:340] cluster config:
	{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:54.572023   11896 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:54.574799   11896 out.go:177] * Starting "addons-230451" primary control-plane node in "addons-230451" cluster
	I0923 10:21:54.575781   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:54.575828   11896 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:54.575840   11896 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:54.575908   11896 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:54.575919   11896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:54.576245   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:21:54.576269   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json: {Name:mke557599469685c702152c654faebe5e1d076a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:54.576419   11896 start.go:360] acquireMachinesLock for addons-230451: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:21:54.576485   11896 start.go:364] duration metric: took 50.98µs to acquireMachinesLock for "addons-230451"
	I0923 10:21:54.576507   11896 start.go:93] Provisioning new machine with config: &{Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:54.576577   11896 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:21:54.577964   11896 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 10:21:54.578088   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:21:54.578137   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:21:54.592162   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0923 10:21:54.592680   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:21:54.593173   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:21:54.593196   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:21:54.593565   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:21:54.593723   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:21:54.593874   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:21:54.593988   11896 start.go:159] libmachine.API.Create for "addons-230451" (driver="kvm2")
	I0923 10:21:54.594024   11896 client.go:168] LocalClient.Create starting
	I0923 10:21:54.594063   11896 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:21:54.862234   11896 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:21:54.952456   11896 main.go:141] libmachine: Running pre-create checks...
	I0923 10:21:54.952476   11896 main.go:141] libmachine: (addons-230451) Calling .PreCreateCheck
	I0923 10:21:54.952976   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:21:54.953437   11896 main.go:141] libmachine: Creating machine...
	I0923 10:21:54.953450   11896 main.go:141] libmachine: (addons-230451) Calling .Create
	I0923 10:21:54.953678   11896 main.go:141] libmachine: (addons-230451) Creating KVM machine...
	I0923 10:21:54.954811   11896 main.go:141] libmachine: (addons-230451) DBG | found existing default KVM network
	I0923 10:21:54.955692   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:54.955529   11918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0923 10:21:54.955752   11896 main.go:141] libmachine: (addons-230451) DBG | created network xml: 
	I0923 10:21:54.955775   11896 main.go:141] libmachine: (addons-230451) DBG | <network>
	I0923 10:21:54.955786   11896 main.go:141] libmachine: (addons-230451) DBG |   <name>mk-addons-230451</name>
	I0923 10:21:54.955801   11896 main.go:141] libmachine: (addons-230451) DBG |   <dns enable='no'/>
	I0923 10:21:54.955811   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955821   11896 main.go:141] libmachine: (addons-230451) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:21:54.955831   11896 main.go:141] libmachine: (addons-230451) DBG |     <dhcp>
	I0923 10:21:54.955840   11896 main.go:141] libmachine: (addons-230451) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:21:54.955852   11896 main.go:141] libmachine: (addons-230451) DBG |     </dhcp>
	I0923 10:21:54.955859   11896 main.go:141] libmachine: (addons-230451) DBG |   </ip>
	I0923 10:21:54.955868   11896 main.go:141] libmachine: (addons-230451) DBG |   
	I0923 10:21:54.955876   11896 main.go:141] libmachine: (addons-230451) DBG | </network>
	I0923 10:21:54.955886   11896 main.go:141] libmachine: (addons-230451) DBG | 
	I0923 10:21:54.961052   11896 main.go:141] libmachine: (addons-230451) DBG | trying to create private KVM network mk-addons-230451 192.168.39.0/24...
	I0923 10:21:55.025203   11896 main.go:141] libmachine: (addons-230451) DBG | private KVM network mk-addons-230451 192.168.39.0/24 created
	I0923 10:21:55.025234   11896 main.go:141] libmachine: (addons-230451) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.025245   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.025189   11918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.025262   11896 main.go:141] libmachine: (addons-230451) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:21:55.025326   11896 main.go:141] libmachine: (addons-230451) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:21:55.288584   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.288456   11918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa...
	I0923 10:21:55.387986   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387858   11918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk...
	I0923 10:21:55.388016   11896 main.go:141] libmachine: (addons-230451) DBG | Writing magic tar header
	I0923 10:21:55.388026   11896 main.go:141] libmachine: (addons-230451) DBG | Writing SSH key tar header
	I0923 10:21:55.388034   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:55.387970   11918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 ...
	I0923 10:21:55.388050   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451
	I0923 10:21:55.388086   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451 (perms=drwx------)
	I0923 10:21:55.388098   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:21:55.388113   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:21:55.388129   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:21:55.388139   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:21:55.388148   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:55.388154   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:21:55.388171   11896 main.go:141] libmachine: (addons-230451) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:21:55.388180   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:55.388192   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:21:55.388205   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:21:55.388216   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:21:55.388227   11896 main.go:141] libmachine: (addons-230451) DBG | Checking permissions on dir: /home
	I0923 10:21:55.388234   11896 main.go:141] libmachine: (addons-230451) DBG | Skipping /home - not owner
	I0923 10:21:55.389182   11896 main.go:141] libmachine: (addons-230451) define libvirt domain using xml: 
	I0923 10:21:55.389204   11896 main.go:141] libmachine: (addons-230451) <domain type='kvm'>
	I0923 10:21:55.389213   11896 main.go:141] libmachine: (addons-230451)   <name>addons-230451</name>
	I0923 10:21:55.389220   11896 main.go:141] libmachine: (addons-230451)   <memory unit='MiB'>4000</memory>
	I0923 10:21:55.389228   11896 main.go:141] libmachine: (addons-230451)   <vcpu>2</vcpu>
	I0923 10:21:55.389238   11896 main.go:141] libmachine: (addons-230451)   <features>
	I0923 10:21:55.389248   11896 main.go:141] libmachine: (addons-230451)     <acpi/>
	I0923 10:21:55.389257   11896 main.go:141] libmachine: (addons-230451)     <apic/>
	I0923 10:21:55.389264   11896 main.go:141] libmachine: (addons-230451)     <pae/>
	I0923 10:21:55.389273   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389291   11896 main.go:141] libmachine: (addons-230451)   </features>
	I0923 10:21:55.389303   11896 main.go:141] libmachine: (addons-230451)   <cpu mode='host-passthrough'>
	I0923 10:21:55.389308   11896 main.go:141] libmachine: (addons-230451)   
	I0923 10:21:55.389313   11896 main.go:141] libmachine: (addons-230451)   </cpu>
	I0923 10:21:55.389318   11896 main.go:141] libmachine: (addons-230451)   <os>
	I0923 10:21:55.389337   11896 main.go:141] libmachine: (addons-230451)     <type>hvm</type>
	I0923 10:21:55.389348   11896 main.go:141] libmachine: (addons-230451)     <boot dev='cdrom'/>
	I0923 10:21:55.389352   11896 main.go:141] libmachine: (addons-230451)     <boot dev='hd'/>
	I0923 10:21:55.389359   11896 main.go:141] libmachine: (addons-230451)     <bootmenu enable='no'/>
	I0923 10:21:55.389363   11896 main.go:141] libmachine: (addons-230451)   </os>
	I0923 10:21:55.389464   11896 main.go:141] libmachine: (addons-230451)   <devices>
	I0923 10:21:55.389496   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='cdrom'>
	I0923 10:21:55.389515   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/boot2docker.iso'/>
	I0923 10:21:55.389532   11896 main.go:141] libmachine: (addons-230451)       <target dev='hdc' bus='scsi'/>
	I0923 10:21:55.389544   11896 main.go:141] libmachine: (addons-230451)       <readonly/>
	I0923 10:21:55.389553   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389565   11896 main.go:141] libmachine: (addons-230451)     <disk type='file' device='disk'>
	I0923 10:21:55.389576   11896 main.go:141] libmachine: (addons-230451)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:21:55.389584   11896 main.go:141] libmachine: (addons-230451)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/addons-230451.rawdisk'/>
	I0923 10:21:55.389594   11896 main.go:141] libmachine: (addons-230451)       <target dev='hda' bus='virtio'/>
	I0923 10:21:55.389602   11896 main.go:141] libmachine: (addons-230451)     </disk>
	I0923 10:21:55.389616   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389629   11896 main.go:141] libmachine: (addons-230451)       <source network='mk-addons-230451'/>
	I0923 10:21:55.389639   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389648   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389658   11896 main.go:141] libmachine: (addons-230451)     <interface type='network'>
	I0923 10:21:55.389669   11896 main.go:141] libmachine: (addons-230451)       <source network='default'/>
	I0923 10:21:55.389678   11896 main.go:141] libmachine: (addons-230451)       <model type='virtio'/>
	I0923 10:21:55.389684   11896 main.go:141] libmachine: (addons-230451)     </interface>
	I0923 10:21:55.389696   11896 main.go:141] libmachine: (addons-230451)     <serial type='pty'>
	I0923 10:21:55.389707   11896 main.go:141] libmachine: (addons-230451)       <target port='0'/>
	I0923 10:21:55.389716   11896 main.go:141] libmachine: (addons-230451)     </serial>
	I0923 10:21:55.389725   11896 main.go:141] libmachine: (addons-230451)     <console type='pty'>
	I0923 10:21:55.389735   11896 main.go:141] libmachine: (addons-230451)       <target type='serial' port='0'/>
	I0923 10:21:55.389746   11896 main.go:141] libmachine: (addons-230451)     </console>
	I0923 10:21:55.389753   11896 main.go:141] libmachine: (addons-230451)     <rng model='virtio'>
	I0923 10:21:55.389772   11896 main.go:141] libmachine: (addons-230451)       <backend model='random'>/dev/random</backend>
	I0923 10:21:55.389789   11896 main.go:141] libmachine: (addons-230451)     </rng>
	I0923 10:21:55.389804   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389813   11896 main.go:141] libmachine: (addons-230451)     
	I0923 10:21:55.389825   11896 main.go:141] libmachine: (addons-230451)   </devices>
	I0923 10:21:55.389833   11896 main.go:141] libmachine: (addons-230451) </domain>
	I0923 10:21:55.389840   11896 main.go:141] libmachine: (addons-230451) 
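
The XML logged above is the libvirt domain the kvm2 driver defines for the node VM: the boot2docker ISO on a SCSI cdrom, the raw disk on virtio, one NIC on mk-addons-230451 and one on the default network, a serial console and a virtio RNG. A minimal sketch of defining and booting such a domain with the libvirt Go bindings; the import path libvirt.org/go/libvirt, the qemu:///system URI and the XML file name are assumptions for illustration, not the driver's own code.

// definevm.go — define a domain from an XML document and boot it.
// Sketch only; assumes the libvirt Go bindings (libvirt.org/go/libvirt)
// and a qemu:///system connection, as in the log above.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("addons-230451.xml") // placeholder: the domain XML shown above
	if err != nil {
		log.Fatal(err)
	}

	// Define the persistent domain, then start it — the driver's
	// "define libvirt domain using xml" and "Creating domain..." steps.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started; next step is waiting for a DHCP lease")
}
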
	I0923 10:21:55.442274   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:1e:65:9c in network default
	I0923 10:21:55.442896   11896 main.go:141] libmachine: (addons-230451) Ensuring networks are active...
	I0923 10:21:55.442919   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:55.443620   11896 main.go:141] libmachine: (addons-230451) Ensuring network default is active
	I0923 10:21:55.443936   11896 main.go:141] libmachine: (addons-230451) Ensuring network mk-addons-230451 is active
	I0923 10:21:55.444473   11896 main.go:141] libmachine: (addons-230451) Getting domain xml...
	I0923 10:21:55.445327   11896 main.go:141] libmachine: (addons-230451) Creating domain...
	I0923 10:21:57.016016   11896 main.go:141] libmachine: (addons-230451) Waiting to get IP...
	I0923 10:21:57.016667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.017033   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.017054   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.017010   11918 retry.go:31] will retry after 208.635315ms: waiting for machine to come up
	I0923 10:21:57.227392   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.227733   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.227756   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.227648   11918 retry.go:31] will retry after 297.216389ms: waiting for machine to come up
	I0923 10:21:57.526245   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.526673   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.526694   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.526643   11918 retry.go:31] will retry after 293.828552ms: waiting for machine to come up
	I0923 10:21:57.822073   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:57.822442   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:57.822463   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:57.822410   11918 retry.go:31] will retry after 602.044959ms: waiting for machine to come up
	I0923 10:21:58.425996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:58.426504   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:58.426525   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:58.426453   11918 retry.go:31] will retry after 610.746842ms: waiting for machine to come up
	I0923 10:21:59.039341   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.039865   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.039886   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.039817   11918 retry.go:31] will retry after 688.678666ms: waiting for machine to come up
	I0923 10:21:59.730224   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:21:59.730635   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:21:59.730660   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:21:59.730596   11918 retry.go:31] will retry after 1.028645485s: waiting for machine to come up
	I0923 10:22:00.760735   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:00.761163   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:00.761193   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:00.761110   11918 retry.go:31] will retry after 973.08502ms: waiting for machine to come up
	I0923 10:22:01.735437   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:01.735826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:01.735858   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:01.735768   11918 retry.go:31] will retry after 1.395648774s: waiting for machine to come up
	I0923 10:22:03.134422   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:03.134826   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:03.134854   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:03.134760   11918 retry.go:31] will retry after 1.707966873s: waiting for machine to come up
	I0923 10:22:04.844605   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:04.845022   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:04.845045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:04.844996   11918 retry.go:31] will retry after 2.702470731s: waiting for machine to come up
	I0923 10:22:07.550535   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:07.550864   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:07.550880   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:07.550829   11918 retry.go:31] will retry after 2.889295682s: waiting for machine to come up
	I0923 10:22:10.441287   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:10.441659   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:10.441679   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:10.441632   11918 retry.go:31] will retry after 2.869623302s: waiting for machine to come up
	I0923 10:22:13.314625   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:13.315023   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find current IP address of domain addons-230451 in network mk-addons-230451
	I0923 10:22:13.315045   11896 main.go:141] libmachine: (addons-230451) DBG | I0923 10:22:13.314983   11918 retry.go:31] will retry after 3.640221936s: waiting for machine to come up
	I0923 10:22:16.958659   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959119   11896 main.go:141] libmachine: (addons-230451) Found IP for machine: 192.168.39.142
	I0923 10:22:16.959156   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has current primary IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:16.959166   11896 main.go:141] libmachine: (addons-230451) Reserving static IP address...
	I0923 10:22:16.959462   11896 main.go:141] libmachine: (addons-230451) DBG | unable to find host DHCP lease matching {name: "addons-230451", mac: "52:54:00:23:7b:36", ip: "192.168.39.142"} in network mk-addons-230451
	I0923 10:22:17.029441   11896 main.go:141] libmachine: (addons-230451) DBG | Getting to WaitForSSH function...
	I0923 10:22:17.029468   11896 main.go:141] libmachine: (addons-230451) Reserved static IP address: 192.168.39.142
	I0923 10:22:17.029481   11896 main.go:141] libmachine: (addons-230451) Waiting for SSH to be available...
	I0923 10:22:17.031574   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.031976   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.032008   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.032179   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH client type: external
	I0923 10:22:17.032208   11896 main.go:141] libmachine: (addons-230451) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa (-rw-------)
	I0923 10:22:17.032242   11896 main.go:141] libmachine: (addons-230451) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:22:17.032261   11896 main.go:141] libmachine: (addons-230451) DBG | About to run SSH command:
	I0923 10:22:17.032275   11896 main.go:141] libmachine: (addons-230451) DBG | exit 0
	I0923 10:22:17.165353   11896 main.go:141] libmachine: (addons-230451) DBG | SSH cmd err, output: <nil>: 
	I0923 10:22:17.165603   11896 main.go:141] libmachine: (addons-230451) KVM machine creation complete!
	I0923 10:22:17.165853   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:17.166404   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166615   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:17.166760   11896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:22:17.166775   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:17.167984   11896 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:22:17.167997   11896 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:22:17.168002   11896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:22:17.168007   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.170262   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170628   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.170654   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.170753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.170943   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171091   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.171216   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.171352   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.171523   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.171532   11896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:22:17.276650   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.276675   11896 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:22:17.276682   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.279238   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279568   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.279618   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.279725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.279902   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280049   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.280188   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.280328   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.280526   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.280539   11896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:22:17.390222   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:22:17.390295   11896 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:22:17.390302   11896 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:22:17.390309   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390534   11896 buildroot.go:166] provisioning hostname "addons-230451"
	I0923 10:22:17.390564   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.390733   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.393254   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393637   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.393661   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.393806   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.393974   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394097   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.394266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.394503   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.394674   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.394685   11896 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-230451 && echo "addons-230451" | sudo tee /etc/hostname
	I0923 10:22:17.515225   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-230451
	
	I0923 10:22:17.515256   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.517989   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518336   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.518363   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.518538   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.518711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518849   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.518973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.519103   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.519305   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.519322   11896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-230451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-230451/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-230451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:17.634431   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:17.634459   11896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:22:17.634507   11896 buildroot.go:174] setting up certificates
	I0923 10:22:17.634531   11896 provision.go:84] configureAuth start
	I0923 10:22:17.634546   11896 main.go:141] libmachine: (addons-230451) Calling .GetMachineName
	I0923 10:22:17.634804   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:17.637289   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637645   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.637672   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.637796   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.639619   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.639935   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.639958   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.640107   11896 provision.go:143] copyHostCerts
	I0923 10:22:17.640166   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:22:17.640266   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:22:17.640357   11896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:22:17.640412   11896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.addons-230451 san=[127.0.0.1 192.168.39.142 addons-230451 localhost minikube]
	I0923 10:22:17.714679   11896 provision.go:177] copyRemoteCerts
	I0923 10:22:17.714730   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:17.714753   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.717181   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717480   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.717505   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.717645   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.717825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.717941   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.718046   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:17.804191   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:22:17.829062   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:17.853034   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:22:17.877800   11896 provision.go:87] duration metric: took 243.235441ms to configureAuth
	I0923 10:22:17.877829   11896 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:22:17.877983   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:17.878058   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:17.880387   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.880814   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:17.880840   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:17.881030   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:17.881209   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881361   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:17.881549   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:17.881728   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:17.881938   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:17.881960   11896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:18.112582   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:22:18.112611   11896 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:22:18.112619   11896 main.go:141] libmachine: (addons-230451) Calling .GetURL
	I0923 10:22:18.114015   11896 main.go:141] libmachine: (addons-230451) DBG | Using libvirt version 6000000
	I0923 10:22:18.115892   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116172   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.116200   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.116375   11896 main.go:141] libmachine: Docker is up and running!
	I0923 10:22:18.116385   11896 main.go:141] libmachine: Reticulating splines...
	I0923 10:22:18.116393   11896 client.go:171] duration metric: took 23.522358813s to LocalClient.Create
	I0923 10:22:18.116418   11896 start.go:167] duration metric: took 23.522430116s to libmachine.API.Create "addons-230451"
	I0923 10:22:18.116432   11896 start.go:293] postStartSetup for "addons-230451" (driver="kvm2")
	I0923 10:22:18.116444   11896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:18.116465   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.116705   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:18.116725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.118667   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.118943   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.118966   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.119088   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.119236   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.119375   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.119475   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.203671   11896 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:18.207849   11896 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:22:18.207881   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:22:18.207965   11896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:22:18.208002   11896 start.go:296] duration metric: took 91.564102ms for postStartSetup
	I0923 10:22:18.208041   11896 main.go:141] libmachine: (addons-230451) Calling .GetConfigRaw
	I0923 10:22:18.208600   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.210821   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211132   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.211160   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.211370   11896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/config.json ...
	I0923 10:22:18.211568   11896 start.go:128] duration metric: took 23.634978913s to createHost
	I0923 10:22:18.211597   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.213764   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214103   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.214126   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.214261   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.214411   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214520   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.214653   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.214811   11896 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:18.214999   11896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0923 10:22:18.215010   11896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:22:18.322271   11896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727086938.296352149
	
	I0923 10:22:18.322297   11896 fix.go:216] guest clock: 1727086938.296352149
	I0923 10:22:18.322306   11896 fix.go:229] Guest: 2024-09-23 10:22:18.296352149 +0000 UTC Remote: 2024-09-23 10:22:18.211580004 +0000 UTC m=+23.734217766 (delta=84.772145ms)
	I0923 10:22:18.322326   11896 fix.go:200] guest clock delta is within tolerance: 84.772145ms
	I0923 10:22:18.322330   11896 start.go:83] releasing machines lock for "addons-230451", held for 23.74583569s
	I0923 10:22:18.322350   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.322592   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:18.325284   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325621   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.325666   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.325767   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326263   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326436   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:18.326529   11896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:18.326593   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.326632   11896 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:18.326655   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:18.329047   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329309   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329394   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329418   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329575   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329694   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:18.329721   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:18.329725   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.329853   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.329920   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:18.329983   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.330068   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:18.330292   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:18.330417   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:18.438062   11896 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:18.444025   11896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:18.601874   11896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:22:18.607742   11896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:22:18.607802   11896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:18.624264   11896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:22:18.624289   11896 start.go:495] detecting cgroup driver to use...
	I0923 10:22:18.624345   11896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:18.639564   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:18.653568   11896 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:18.653621   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:18.667712   11896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:18.681874   11896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:18.792202   11896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:18.925990   11896 docker.go:233] disabling docker service ...
	I0923 10:22:18.926064   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:18.940378   11896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:18.953192   11896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:19.087815   11896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:19.203155   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:19.216978   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:19.235019   11896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:19.235096   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.245714   11896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:19.245818   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.256490   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.267602   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.278326   11896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:19.289301   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.299699   11896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:19.317469   11896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
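	A sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf keys, reconstructed from the sed commands above (the actual file is not captured in this log):
		pause_image = "registry.k8s.io/pause:3.10"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]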
	I0923 10:22:19.328378   11896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:19.338564   11896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:19.338621   11896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:19.352191   11896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:22:19.362359   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:19.484977   11896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:19.579332   11896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:19.579411   11896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:19.584157   11896 start.go:563] Will wait 60s for crictl version
	I0923 10:22:19.584218   11896 ssh_runner.go:195] Run: which crictl
	I0923 10:22:19.587946   11896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:19.628720   11896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
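	The same runtime checks can be repeated by hand over SSH on the guest; a minimal sketch using the commands already present in this log:
		$ sudo /usr/bin/crictl version
		$ crio --version
		$ stat /var/run/crio/crio.sock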
	I0923 10:22:19.628857   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.657600   11896 ssh_runner.go:195] Run: crio --version
	I0923 10:22:19.690821   11896 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:22:19.692029   11896 main.go:141] libmachine: (addons-230451) Calling .GetIP
	I0923 10:22:19.694415   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694719   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:19.694755   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:19.694901   11896 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:19.698798   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:19.711452   11896 kubeadm.go:883] updating cluster {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:19.711550   11896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:19.711592   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:19.747339   11896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:22:19.747410   11896 ssh_runner.go:195] Run: which lz4
	I0923 10:22:19.751336   11896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:22:19.755656   11896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:22:19.755687   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:22:21.047377   11896 crio.go:462] duration metric: took 1.296092639s to copy over tarball
	I0923 10:22:21.047452   11896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:22:23.149022   11896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.101536224s)
	I0923 10:22:23.149063   11896 crio.go:469] duration metric: took 2.101658311s to extract the tarball
	I0923 10:22:23.149074   11896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 10:22:23.186090   11896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:23.231874   11896 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:23.231895   11896 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:23.231902   11896 kubeadm.go:934] updating node { 192.168.39.142 8443 v1.31.1 crio true true} ...
	I0923 10:22:23.231987   11896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-230451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:23.232047   11896 ssh_runner.go:195] Run: crio config
	I0923 10:22:23.284759   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:23.284784   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:23.284800   11896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:23.284832   11896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-230451 NodeName:addons-230451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:23.284967   11896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-230451"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:23.285038   11896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:23.294894   11896 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:23.294968   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:23.304559   11896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:22:23.321682   11896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:23.338467   11896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
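	The generated kubeadm config written above could be sanity-checked before init with a dry run; a hypothetical invocation using the paths from this log (minikube does not run this exact command here):
		$ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run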
	I0923 10:22:23.355102   11896 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:23.359077   11896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:23.371614   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:23.497716   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:23.524962   11896 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451 for IP: 192.168.39.142
	I0923 10:22:23.524985   11896 certs.go:194] generating shared ca certs ...
	I0923 10:22:23.525001   11896 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.525125   11896 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:22:23.653794   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt ...
	I0923 10:22:23.653826   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt: {Name:mk0d92c2a9963fcf15ffb070721c588192e7736e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.653986   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key ...
	I0923 10:22:23.653996   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key: {Name:mkeb4e4ef8ef3c516f46598d48867c8293e2d97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.654085   11896 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:22:23.786686   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt ...
	I0923 10:22:23.786718   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt: {Name:mk4094838d6b10d87fe353fc7ecb8f6c0f591232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786881   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key ...
	I0923 10:22:23.786892   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key: {Name:mkae41c92d5aff93d9eaa4a90706202e465fd08d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:23.786960   11896 certs.go:256] generating profile certs ...
	I0923 10:22:23.787011   11896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key
	I0923 10:22:23.787024   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt with IP's: []
	I0923 10:22:24.040672   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt ...
	I0923 10:22:24.040705   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: {Name:mk12ca8a37f255852c15957acdaaac5803f6db08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040873   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key ...
	I0923 10:22:24.040883   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.key: {Name:mk5ec5d734cc6123b964d4a8aa27ee9625037ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.040949   11896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89
	I0923 10:22:24.040966   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.142]
	I0923 10:22:24.248598   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 ...
	I0923 10:22:24.248628   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89: {Name:mk9332743467473c4d78e8a673a2ddc310d8086b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248782   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 ...
	I0923 10:22:24.248794   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89: {Name:mk563d416f16b853b493dbf6317b9fb699d8141e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.248878   11896 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt
	I0923 10:22:24.248949   11896 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key.6c2cdf89 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key
	I0923 10:22:24.248994   11896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key
	I0923 10:22:24.249010   11896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt with IP's: []
	I0923 10:22:24.333105   11896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt ...
	I0923 10:22:24.333135   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt: {Name:mk1c36ccdfe89e6949c41221860582d71d9abecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.333299   11896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key ...
	I0923 10:22:24.333309   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key: {Name:mk001f630ca2a3ebb6948b9fe6cbe0a137191074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:24.333516   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:24.333586   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:22:24.333624   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:24.333649   11896 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:22:24.334174   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:24.364904   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:24.389692   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:24.413480   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:22:24.437332   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:24.463620   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:22:24.489652   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:24.515979   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:24.542229   11896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:24.568853   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:24.589287   11896 ssh_runner.go:195] Run: openssl version
	I0923 10:22:24.596782   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:24.607940   11896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612566   11896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.612615   11896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:24.618835   11896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
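The step above installs minikube's CA into the guest's system trust store: the PEM is copied to /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and a <subject-hash>.0 link (b5213941.0 in this run) is created so OpenSSL-based clients can find it by hash. A minimal, hypothetical Go sketch of the same idea - not minikube's actual code; it assumes openssl is on PATH and reuses the paths from this log:

// installCA mirrors the shell sequence above: hash the certificate with
// `openssl x509 -hash` and point /etc/ssl/certs/<hash>.0 at it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then recreate it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}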
	I0923 10:22:24.629990   11896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:24.634389   11896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:24.634449   11896 kubeadm.go:392] StartCluster: {Name:addons-230451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-230451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:24.634545   11896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:24.634624   11896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:24.674296   11896 cri.go:89] found id: ""
	I0923 10:22:24.674376   11896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:24.684623   11896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:24.695036   11896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:24.707226   11896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:24.707249   11896 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:24.707293   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:24.716855   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:24.716917   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:24.727043   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:24.736874   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:24.736946   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:24.746697   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.756313   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:24.756377   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:24.766227   11896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:24.775698   11896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:24.775768   11896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
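The grep/rm sequence above is minikube's stale-config check: before running kubeadm init, each kubeadm-managed kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; on this first start none of the files exist, so the removals are no-ops. A rough Go sketch of that loop (an illustrative helper, not the ssh_runner implementation; it assumes grep and sudo are available):

package main

import (
	"fmt"
	"os/exec"
)

func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}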
	I0923 10:22:24.786611   11896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:22:24.838767   11896 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:24.838821   11896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:24.940902   11896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:24.941087   11896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:24.941212   11896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:24.948875   11896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:25.257696   11896 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:25.257801   11896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:25.257881   11896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:25.257985   11896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:25.258096   11896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:25.363288   11896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:25.425568   11896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:25.496334   11896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:25.496516   11896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.661761   11896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:25.661907   11896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-230451 localhost] and IPs [192.168.39.142 127.0.0.1 ::1]
	I0923 10:22:25.727123   11896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:25.906579   11896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:25.974535   11896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:25.974623   11896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:26.123945   11896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:26.269690   11896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:26.518592   11896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:26.597902   11896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:26.831627   11896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:26.832272   11896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:26.836780   11896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:26.838584   11896 out.go:235]   - Booting up control plane ...
	I0923 10:22:26.838682   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:26.838755   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:26.839231   11896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:26.853944   11896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:26.861028   11896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:26.861120   11896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:26.983148   11896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:26.983286   11896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:27.483290   11896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.847264ms
	I0923 10:22:27.483400   11896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:32.981821   11896 kubeadm.go:310] [api-check] The API server is healthy after 5.502127762s
	I0923 10:22:32.994814   11896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:33.013765   11896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:33.046425   11896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:33.046697   11896 kubeadm.go:310] [mark-control-plane] Marking the node addons-230451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:33.059414   11896 kubeadm.go:310] [bootstrap-token] Using token: 2hvssy.27mbk5fz3uxysew6
	I0923 10:22:33.060728   11896 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:33.060856   11896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:33.066668   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:33.078485   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:33.081626   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:33.087430   11896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:33.091457   11896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:33.390136   11896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:33.813952   11896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:34.387868   11896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:34.388882   11896 kubeadm.go:310] 
	I0923 10:22:34.388988   11896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:34.388998   11896 kubeadm.go:310] 
	I0923 10:22:34.389127   11896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:34.389143   11896 kubeadm.go:310] 
	I0923 10:22:34.389170   11896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:34.389244   11896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:34.389326   11896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:34.389341   11896 kubeadm.go:310] 
	I0923 10:22:34.389420   11896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:34.389431   11896 kubeadm.go:310] 
	I0923 10:22:34.389498   11896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:34.389516   11896 kubeadm.go:310] 
	I0923 10:22:34.389562   11896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:34.389676   11896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:34.389782   11896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:34.389792   11896 kubeadm.go:310] 
	I0923 10:22:34.389900   11896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:34.389993   11896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:34.390002   11896 kubeadm.go:310] 
	I0923 10:22:34.390104   11896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390230   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:22:34.390260   11896 kubeadm.go:310] 	--control-plane 
	I0923 10:22:34.390266   11896 kubeadm.go:310] 
	I0923 10:22:34.390390   11896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:34.390400   11896 kubeadm.go:310] 
	I0923 10:22:34.390516   11896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2hvssy.27mbk5fz3uxysew6 \
	I0923 10:22:34.390643   11896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:22:34.391299   11896 kubeadm.go:310] W0923 10:22:24.818359     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391630   11896 kubeadm.go:310] W0923 10:22:24.819029     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:34.391761   11896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
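The --discovery-token-ca-cert-hash value printed in the join commands above is kubeadm's CA pin: the SHA-256 of the cluster CA certificate's Subject Public Key Info. A small self-contained Go sketch that recomputes such a hash (the certificate path is the one used in this run; on a stock kubeadm node it would be /etc/kubernetes/pki/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA certificate path from this run's log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash pins the SHA-256 of the CA's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}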
	I0923 10:22:34.391794   11896 cni.go:84] Creating CNI manager for ""
	I0923 10:22:34.391806   11896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:22:34.393547   11896 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:22:34.394830   11896 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:22:34.412319   11896 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:22:34.431070   11896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:34.431130   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:34.431136   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-230451 minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-230451 minikube.k8s.io/primary=true
	I0923 10:22:34.546608   11896 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:34.546625   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.047328   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:35.546823   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.046794   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:36.547056   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.046889   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:37.547633   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.046761   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:38.547665   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.047581   11896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:39.133362   11896 kubeadm.go:1113] duration metric: took 4.702301784s to wait for elevateKubeSystemPrivileges
	I0923 10:22:39.133409   11896 kubeadm.go:394] duration metric: took 14.498964743s to StartCluster
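The repeated `kubectl get sa default` calls between 10:22:34 and 10:22:39 are a poll loop: the start waits for the default ServiceAccount to exist before granting kube-system privileges, which is what the elevateKubeSystemPrivileges duration above measures. A minimal sketch of such a wait loop, assuming kubectl is on PATH and using the kubeconfig path from this run (illustrative only, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` roughly every
// 500ms until it succeeds, the context is cancelled, or the deadline passes.
func waitForDefaultServiceAccount(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists; later RBAC steps can proceed
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultServiceAccount(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println("timed out waiting for default service account:", err)
	}
}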
	I0923 10:22:39.133426   11896 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.133569   11896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:22:39.133997   11896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:39.134254   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:39.134262   11896 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:39.134340   11896 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:39.134490   11896 addons.go:69] Setting yakd=true in profile "addons-230451"
	I0923 10:22:39.134508   11896 addons.go:234] Setting addon yakd=true in "addons-230451"
	I0923 10:22:39.134521   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.134537   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134577   11896 addons.go:69] Setting inspektor-gadget=true in profile "addons-230451"
	I0923 10:22:39.134590   11896 addons.go:234] Setting addon inspektor-gadget=true in "addons-230451"
	I0923 10:22:39.134616   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134702   11896 addons.go:69] Setting storage-provisioner=true in profile "addons-230451"
	I0923 10:22:39.134726   11896 addons.go:234] Setting addon storage-provisioner=true in "addons-230451"
	I0923 10:22:39.134749   11896 addons.go:69] Setting registry=true in profile "addons-230451"
	I0923 10:22:39.135058   11896 addons.go:234] Setting addon registry=true in "addons-230451"
	I0923 10:22:39.135093   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134729   11896 addons.go:69] Setting cloud-spanner=true in profile "addons-230451"
	I0923 10:22:39.134732   11896 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-230451"
	I0923 10:22:39.135178   11896 addons.go:69] Setting volcano=true in profile "addons-230451"
	I0923 10:22:39.135163   11896 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-230451"
	I0923 10:22:39.135195   11896 addons.go:234] Setting addon volcano=true in "addons-230451"
	I0923 10:22:39.135209   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-230451"
	I0923 10:22:39.135225   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135226   11896 addons.go:69] Setting volumesnapshots=true in profile "addons-230451"
	I0923 10:22:39.135243   11896 addons.go:234] Setting addon volumesnapshots=true in "addons-230451"
	I0923 10:22:39.135269   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.134757   11896 addons.go:69] Setting metrics-server=true in profile "addons-230451"
	I0923 10:22:39.135294   11896 addons.go:234] Setting addon metrics-server=true in "addons-230451"
	I0923 10:22:39.135313   11896 addons.go:234] Setting addon cloud-spanner=true in "addons-230451"
	I0923 10:22:39.135037   11896 addons.go:69] Setting default-storageclass=true in profile "addons-230451"
	I0923 10:22:39.135326   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135334   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135346   11896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-230451"
	I0923 10:22:39.135361   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135745   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135322   11896 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-230451"
	I0923 10:22:39.135770   11896 addons.go:69] Setting ingress-dns=true in profile "addons-230451"
	I0923 10:22:39.135775   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.135782   11896 addons.go:234] Setting addon ingress-dns=true in "addons-230451"
	I0923 10:22:39.135791   11896 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:39.135814   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135811   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.135827   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.135864   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136234   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136268   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.136281   11896 addons.go:69] Setting gcp-auth=true in profile "addons-230451"
	I0923 10:22:39.136303   11896 mustload.go:65] Loading cluster: addons-230451
	I0923 10:22:39.136368   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.136406   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.134746   11896 addons.go:69] Setting ingress=true in profile "addons-230451"
	I0923 10:22:39.136467   11896 addons.go:234] Setting addon ingress=true in "addons-230451"
	I0923 10:22:39.136921   11896 config.go:182] Loaded profile config "addons-230451": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:39.137052   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137087   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137214   11896 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-230451"
	I0923 10:22:39.137372   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137507   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137538   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137549   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.137614   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.137976   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.137511   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.138578   11896 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:39.139899   11896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:39.145488   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145585   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145613   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145654   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145676   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145800   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145841   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145871   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145891   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145914   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145918   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.145952   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.145983   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.161544   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0923 10:22:39.161884   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0923 10:22:39.162070   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0923 10:22:39.162264   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.162826   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.162851   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.162936   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163040   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.163434   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163454   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163580   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0923 10:22:39.163764   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.163788   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.163840   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.163934   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I0923 10:22:39.164104   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.164684   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.164721   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.185510   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0923 10:22:39.185571   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.185662   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185706   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.185909   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.185926   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.186778   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.186932   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.186951   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.187346   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187387   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187436   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187463   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.187522   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.187703   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.187731   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.192887   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.193023   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.201290   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.201305   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.201820   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.201838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.201956   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.201993   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.202335   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.229941   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0923 10:22:39.229953   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:22:39.229981   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0923 10:22:39.230081   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0923 10:22:39.229945   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0923 10:22:39.230091   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0923 10:22:39.230158   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0923 10:22:39.230232   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0923 10:22:39.230239   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0923 10:22:39.230393   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.230446   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.231158   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231163   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231251   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231315   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231351   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231380   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231777   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0923 10:22:39.231833   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.231847   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.231916   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.231949   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.232175   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232191   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232195   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232209   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232317   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232328   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232431   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232446   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232586   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232645   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232647   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232657   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232731   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232765   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232769   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.232778   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232780   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.232793   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.232834   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233524   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233547   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233528   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233605   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.233669   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.233682   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.233731   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.233898   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.233933   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.233988   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234016   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234116   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.234147   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234176   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234491   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234491   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234526   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.234552   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.234889   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.234926   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.235293   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.235441   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.236819   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.236838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.237864   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.238168   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.238717   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.240479   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240843   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.240799   11896 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-230451"
	I0923 10:22:39.240943   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.241475   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241513   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241572   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.241620   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.241673   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241694   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:39.241712   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:39.241728   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:39.241939   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:39.241966   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:39.241981   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 10:22:39.242061   11896 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:39.242209   11896 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:39.243364   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.243382   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:39.243400   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.243621   11896 addons.go:234] Setting addon default-storageclass=true in "addons-230451"
	I0923 10:22:39.243659   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:39.244006   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.244048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.245011   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:39.245411   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0923 10:22:39.245745   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.246261   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.246280   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.246342   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246653   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.246702   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.246763   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.246918   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.247079   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.247234   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.247287   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.247413   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.248325   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:39.249556   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:39.250623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:39.251623   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:39.252410   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0923 10:22:39.252964   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.253331   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.253997   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:39.254684   11896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:22:39.255992   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.256016   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.256228   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.256248   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:39.256266   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.256781   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:39.257114   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.258716   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:39.259215   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259570   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.259591   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.259735   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:39.259749   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:39.259767   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.259814   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.259944   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.260065   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.260176   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.262079   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0923 10:22:39.262584   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.262683   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263031   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.263060   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.263202   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.263213   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.263419   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.263572   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.263624   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.264175   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.264214   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.264455   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.264597   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.265940   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.265968   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.271246   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0923 10:22:39.271789   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.272388   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.272405   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.272805   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.273028   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.274894   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0923 10:22:39.275213   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.275844   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.275867   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.276203   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.278018   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0923 10:22:39.278347   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0923 10:22:39.278503   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.278767   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0923 10:22:39.278898   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.279182   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.279681   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.279702   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.279763   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.280273   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.280289   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.280330   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280582   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.280689   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.280918   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281367   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.281152   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0923 10:22:39.281714   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.281734   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.281796   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:39.281834   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:39.282057   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.282159   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.282388   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.282544   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.282560   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.282678   11896 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:39.283012   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.283243   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.283634   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:39.283650   11896 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:39.283668   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.283893   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.285400   11896 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:39.286497   11896 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:39.286503   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.286515   11896 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:39.286544   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.286846   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.286869   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.287301   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.287493   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.287665   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.287806   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.288302   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.288696   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0923 10:22:39.289083   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.289683   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.289701   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.290084   11896 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:39.290241   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.290292   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290473   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.290735   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.290773   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.290925   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.291070   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.291212   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.291343   11896 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.291363   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:39.291378   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.291451   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.295024   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295085   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.295103   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.295534   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.295687   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.295814   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.297105   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0923 10:22:39.297670   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0923 10:22:39.298051   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298086   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.298472   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298495   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
	I0923 10:22:39.298498   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298662   11896 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:39.298748   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.298766   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.298991   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.299054   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299408   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.299577   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300091   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300214   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.300223   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.300609   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.300821   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.300911   11896 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:39.301783   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301909   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.301978   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0923 10:22:39.302139   11896 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:39.302152   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:39.302178   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.302381   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.302852   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.302875   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.302984   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.303301   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.303431   11896 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:39.303515   11896 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:39.303574   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.304688   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:39.304717   11896 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:39.304740   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.304744   11896 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.304807   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:39.304819   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.305822   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.307556   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0923 10:22:39.307586   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.307720   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:39.307774   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.308043   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.308066   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.308423   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.308972   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.309094   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.309127   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.308530   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.309353   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.309801   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.309838   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.310129   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.310151   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.310205   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:39.310257   11896 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:39.310305   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.310367   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.310501   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.310551   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.310650   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.310779   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.311023   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311548   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.311571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.311666   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.311778   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:39.311805   11896 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:39.311825   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.311915   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.312185   11896 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.312202   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:39.312219   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.312343   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0923 10:22:39.312499   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.312659   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.312900   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.312942   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:39.313158   11896 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.313227   11896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:39.313245   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.313364   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:39.313398   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:39.313741   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:39.313923   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:39.315763   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:39.315810   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316253   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.316283   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.316514   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.316662   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.316765   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.316924   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.317045   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317358   11896 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:39.317533   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.317571   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.317710   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.317848   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.317973   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.318106   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.318191   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318580   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.318598   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.318878   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.319048   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.319206   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.319289   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:39.320204   11896 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:39.321465   11896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.321479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:39.321491   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:39.323996   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324361   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:39.324386   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:39.324495   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:39.324602   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:39.324711   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:39.324788   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	W0923 10:22:39.325511   11896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
	I0923 10:22:39.325542   11896 retry.go:31] will retry after 146.678947ms: ssh: handshake failed: read tcp 192.168.39.1:50144->192.168.39.142:22: read: connection reset by peer
	I0923 10:22:39.557159   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:39.580915   11896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:39.580948   11896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:39.596569   11896 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:39.596596   11896 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:39.610676   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:39.621265   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:39.641318   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:39.653920   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:39.688552   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:39.688582   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:39.695267   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:39.695299   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:39.700872   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:39.701278   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:39.701293   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:39.730612   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:39.730640   11896 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:39.741177   11896 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:39.741202   11896 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:39.775359   11896 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:39.775388   11896 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:39.777672   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:39.829748   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:39.829779   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:39.845681   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:39.845709   11896 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:39.868956   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:39.868979   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:39.878049   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:39.878072   11896 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:39.910637   11896 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:39.910662   11896 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:39.925074   11896 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:39.925100   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:39.964060   11896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:39.964082   11896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:40.059843   11896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.059864   11896 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:40.073448   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:40.073471   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:40.094580   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:40.094602   11896 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:40.102412   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:40.102434   11896 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:40.111856   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:40.111870   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:40.149555   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:40.244365   11896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:40.244393   11896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:40.286452   11896 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.286479   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:40.301058   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:40.319790   11896 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.319818   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:40.395452   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:40.395478   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:40.420594   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:40.465580   11896 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:40.465611   11896 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:40.517028   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:40.586224   11896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:40.586264   11896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:40.716640   11896 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:40.716667   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:40.864786   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:40.864809   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:40.974629   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:41.329483   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:41.329520   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:41.615715   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:41.615746   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:41.850585   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:41.850616   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:42.139510   11896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.139536   11896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:42.203522   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.646323739s)
	I0923 10:22:42.203571   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.203579   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.203637   11896 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.62266543s)
	I0923 10:22:42.203652   11896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.622706839s)
	I0923 10:22:42.203673   11896 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:42.203984   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204051   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204059   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.204072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.204292   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.204308   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.204357   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.204648   11896 node_ready.go:35] waiting up to 6m0s for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265962   11896 node_ready.go:49] node "addons-230451" has status "Ready":"True"
	I0923 10:22:42.265985   11896 node_ready.go:38] duration metric: took 61.313529ms for node "addons-230451" to be "Ready" ...
	I0923 10:22:42.265995   11896 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:22:42.382117   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
	I0923 10:22:42.433215   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:42.639353   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.028639151s)
	I0923 10:22:42.639403   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639414   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639437   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.018135683s)
	I0923 10:22:42.639481   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639496   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639513   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.99816104s)
	I0923 10:22:42.639574   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639591   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639699   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639710   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639718   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639731   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.639808   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639885   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:42.639923   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.639930   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.639937   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.639944   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.640007   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640014   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.640168   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.640182   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641237   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641246   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.641258   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.641266   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.641730   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.641744   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:42.815687   11896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-230451" context rescaled to 1 replicas
	I0923 10:22:42.853390   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:42.853416   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:42.853662   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:42.853720   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:44.448550   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:46.283789   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:46.283834   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.286793   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287202   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.287227   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.287394   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.287553   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.287738   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.287873   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:46.555575   11896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:46.623519   11896 addons.go:234] Setting addon gcp-auth=true in "addons-230451"
	I0923 10:22:46.623584   11896 host.go:66] Checking if "addons-230451" exists ...
	I0923 10:22:46.624001   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.624048   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.639512   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0923 10:22:46.639966   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.640495   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.640515   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.640853   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.641315   11896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:22:46.641348   11896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:22:46.656710   11896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0923 10:22:46.657190   11896 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:22:46.657684   11896 main.go:141] libmachine: Using API Version  1
	I0923 10:22:46.657706   11896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:22:46.658044   11896 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:22:46.658273   11896 main.go:141] libmachine: (addons-230451) Calling .GetState
	I0923 10:22:46.659892   11896 main.go:141] libmachine: (addons-230451) Calling .DriverName
	I0923 10:22:46.660080   11896 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:46.660106   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHHostname
	I0923 10:22:46.662909   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663305   11896 main.go:141] libmachine: (addons-230451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:36", ip: ""} in network mk-addons-230451: {Iface:virbr1 ExpiryTime:2024-09-23 11:22:10 +0000 UTC Type:0 Mac:52:54:00:23:7b:36 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:addons-230451 Clientid:01:52:54:00:23:7b:36}
	I0923 10:22:46.663330   11896 main.go:141] libmachine: (addons-230451) DBG | domain addons-230451 has defined IP address 192.168.39.142 and MAC address 52:54:00:23:7b:36 in network mk-addons-230451
	I0923 10:22:46.663560   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHPort
	I0923 10:22:46.663699   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHKeyPath
	I0923 10:22:46.663835   11896 main.go:141] libmachine: (addons-230451) Calling .GetSSHUsername
	I0923 10:22:46.663965   11896 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/addons-230451/id_rsa Username:docker}
	I0923 10:22:47.013493   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:47.307143   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.606234939s)
	I0923 10:22:47.307203   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307215   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307214   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.5295194s)
	I0923 10:22:47.307233   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307245   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307246   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.653288375s)
	I0923 10:22:47.307261   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.157672592s)
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307296   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307316   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307318   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307367   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.006265482s)
	I0923 10:22:47.307413   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307416   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.886776853s)
	I0923 10:22:47.307425   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307441   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307452   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307512   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.790448754s)
	W0923 10:22:47.307537   11896 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:47.307568   11896 retry.go:31] will retry after 312.840585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:47.307652   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.332993076s)
	I0923 10:22:47.307672   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307694   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307874   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307912   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.307930   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307936   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307954   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307957   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.307963   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307966   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.307973   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307977   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.307984   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.307941   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308023   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308030   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308072   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308075   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308102   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308105   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.308114   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308121   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308128   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308132   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.308135   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308138   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308142   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308145   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308165   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.308177   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.308185   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.308191   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.309012   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309037   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309044   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309052   11896 addons.go:475] Verifying addon registry=true in "addons-230451"
	I0923 10:22:47.309241   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309250   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309257   11896 addons.go:475] Verifying addon metrics-server=true in "addons-230451"
	I0923 10:22:47.309419   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309453   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309460   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309479   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309499   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.309736   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.309772   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.309779   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.310028   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.310059   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.310066   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311116   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.311130   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.311151   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.311171   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.312036   11896 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-230451 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:47.312654   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.312668   11896 out.go:177] * Verifying registry addon...
	I0923 10:22:47.312738   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.312748   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.312802   11896 addons.go:475] Verifying addon ingress=true in "addons-230451"
	I0923 10:22:47.313891   11896 out.go:177] * Verifying ingress addon...
	I0923 10:22:47.314808   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:47.315984   11896 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:47.333135   11896 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:47.333156   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.333672   11896 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:47.333694   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.362191   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.362210   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.362500   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.362519   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.620787   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:47.853958   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.854430   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.976575   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.543318151s)
	I0923 10:22:47.976615   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976627   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.976662   11896 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.31655795s)
	I0923 10:22:47.976916   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.976936   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.976944   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:47.976951   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:47.977493   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:47.977493   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:47.977516   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:47.977530   11896 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-230451"
	I0923 10:22:47.978353   11896 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:47.979244   11896 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:47.980816   11896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:47.981547   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:47.981951   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:47.981965   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:48.012863   11896 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:48.012883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.081072   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:48.081094   11896 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:48.235021   11896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.235041   11896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:48.323476   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.325316   11896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:48.329262   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.487988   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.823283   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.823712   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.987157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.319059   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.320824   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.394285   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:49.486336   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.828379   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.845245   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.018644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.230146   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.609312903s)
	I0923 10:22:50.230207   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230224   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230234   11896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.904884388s)
	I0923 10:22:50.230272   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230290   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230489   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230525   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230539   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230546   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230590   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230616   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230653   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230664   11896 main.go:141] libmachine: Making call to close driver server
	I0923 10:22:50.230671   11896 main.go:141] libmachine: (addons-230451) Calling .Close
	I0923 10:22:50.230801   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230830   11896 main.go:141] libmachine: (addons-230451) DBG | Closing plugin on server side
	I0923 10:22:50.230834   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230842   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.230852   11896 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:22:50.230861   11896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:22:50.232850   11896 addons.go:475] Verifying addon gcp-auth=true in "addons-230451"
	I0923 10:22:50.234749   11896 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:50.236715   11896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:50.240230   11896 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:50.240245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.341082   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.341419   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.485879   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.741139   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.819391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.822087   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.987076   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.240553   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.318867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.320884   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.740284   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.818704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.821561   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.888695   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:51.986219   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.241303   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.320629   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.321209   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.486705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.740428   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.819857   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.820725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.986468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.241277   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.318492   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.320484   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.520510   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.969717   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.974986   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.975544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.977863   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:53.986625   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.240759   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.320774   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.321373   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.486278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.740966   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.819228   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.822185   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.986658   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.240365   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.318431   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.320427   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.486106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.740761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.823261   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.825324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.989815   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.241561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.320639   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.320643   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.388229   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:56.487473   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.740723   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.819638   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.821374   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.986618   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.241599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.319347   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.320708   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.486908   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.740748   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.820700   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.820754   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.987523   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.239942   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.319913   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.320838   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.389727   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:22:58.488040   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.741176   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.818677   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.819952   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.986499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.240344   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.319170   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.321183   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.486469   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.740550   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.819952   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.823020   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.986806   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.240835   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.319990   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.321306   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.486611   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.820118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.821668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.889293   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:00.986752   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.240810   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.321217   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.321511   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.486551   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.741019   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.819706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.820249   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.986133   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.240968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.319524   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.322199   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.493692   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.740885   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.819358   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.821237   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.224620   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.337753   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.338071   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.338115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.387890   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:03.485468   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.739963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.820105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.820454   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.986601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.240576   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.321031   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.321397   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.485628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.007814   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.008134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.008442   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.011226   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.260975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.320236   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.321513   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.389023   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:05.487041   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.740227   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.818341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.819725   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.986304   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.240486   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.318856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.321629   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.486680   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.740290   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.820149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.820293   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.986074   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.240910   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.319345   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.320504   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.485787   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.740373   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.820179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.821686   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.888632   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:07.986582   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.239642   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.319453   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.321440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.486021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.741278   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.818653   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.820061   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.987104   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.242250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.319190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.320606   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.487395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.740299   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.818478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.820810   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.985704   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.240100   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.318707   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.320481   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.391013   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:10.486242   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.740836   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.819488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.820601   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.986709   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.241401   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.318575   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.320781   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.486517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.740599   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.819000   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.820650   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.985664   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.241013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.320039   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.320366   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.486654   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.819149   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.821095   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.887785   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:12.986107   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.241268   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.318846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.320609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.486601   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.740348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.820668   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.986922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.240485   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.320070   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.320544   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.910906   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.923120   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:15.012269   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.012603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.012605   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.013481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.241391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.342450   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.342933   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.487968   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.741013   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.819807   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.820519   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.986818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.240849   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.318613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.319887   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.486621   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.741530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.818963   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.820103   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.986250   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.241331   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.318639   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.319759   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.388335   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:17.486169   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.740440   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.818651   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.820082   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.986722   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.240851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.319266   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.321957   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.486827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.749479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.818898   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.819965   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.986655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.353395   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.353455   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.353980   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.388491   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:19.486286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.740811   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.819265   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.821465   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.987794   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.241615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.343341   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.345086   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.485876   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.741706   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.822445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.822885   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.986251   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.241243   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.342973   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.343648   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.388636   11896 pod_ready.go:103] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.486389   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.741586   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.820057   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.820872   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.986245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.240821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.321008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.321506   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.487367   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.746761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.845229   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.845516   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.889257   11896 pod_ready.go:93] pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.889286   11896 pod_ready.go:82] duration metric: took 40.507126685s for pod "coredns-7c65d6cfc9-7mfbw" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.889299   11896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.891229   11896 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891254   11896 pod_ready.go:82] duration metric: took 1.946573ms for pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace to be "Ready" ...
	E0923 10:23:22.891266   11896 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-kvrjl" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kvrjl" not found
	I0923 10:23:22.891274   11896 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899549   11896 pod_ready.go:93] pod "etcd-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.899575   11896 pod_ready.go:82] duration metric: took 8.292332ms for pod "etcd-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.899586   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906049   11896 pod_ready.go:93] pod "kube-apiserver-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.906074   11896 pod_ready.go:82] duration metric: took 6.480206ms for pod "kube-apiserver-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.906086   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910833   11896 pod_ready.go:93] pod "kube-controller-manager-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:22.910859   11896 pod_ready.go:82] duration metric: took 4.764833ms for pod "kube-controller-manager-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.910872   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:22.986668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.089873   11896 pod_ready.go:93] pod "kube-proxy-2f5tn" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.089900   11896 pod_ready.go:82] duration metric: took 179.019892ms for pod "kube-proxy-2f5tn" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.089912   11896 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.241038   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.320388   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.322190   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.486569   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.487599   11896 pod_ready.go:93] pod "kube-scheduler-addons-230451" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.487631   11896 pod_ready.go:82] duration metric: took 397.7086ms for pod "kube-scheduler-addons-230451" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.487644   11896 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.740324   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.818859   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.819999   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.886465   11896 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:23.886497   11896 pod_ready.go:82] duration metric: took 398.839138ms for pod "nvidia-device-plugin-daemonset-t2lzg" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:23.886507   11896 pod_ready.go:39] duration metric: took 41.620501569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:23.886523   11896 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:23:23.886570   11896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:23:23.914996   11896 api_server.go:72] duration metric: took 44.780704115s to wait for apiserver process to appear ...
	I0923 10:23:23.915024   11896 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:23:23.915046   11896 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0923 10:23:23.920072   11896 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0923 10:23:23.921132   11896 api_server.go:141] control plane version: v1.31.1
	I0923 10:23:23.921159   11896 api_server.go:131] duration metric: took 6.126816ms to wait for apiserver health ...
	I0923 10:23:23.921169   11896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:23:24.437367   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.437846   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.438079   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.438323   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.442864   11896 system_pods.go:59] 17 kube-system pods found
	I0923 10:23:24.442893   11896 system_pods.go:61] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.442904   11896 system_pods.go:61] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.442914   11896 system_pods.go:61] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.442930   11896 system_pods.go:61] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.442939   11896 system_pods.go:61] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.442949   11896 system_pods.go:61] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.442954   11896 system_pods.go:61] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.442963   11896 system_pods.go:61] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.442968   11896 system_pods.go:61] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.442976   11896 system_pods.go:61] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.442985   11896 system_pods.go:61] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.442993   11896 system_pods.go:61] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.443002   11896 system_pods.go:61] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.443009   11896 system_pods.go:61] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.443020   11896 system_pods.go:61] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443030   11896 system_pods.go:61] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.443039   11896 system_pods.go:61] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.443049   11896 system_pods.go:74] duration metric: took 521.872993ms to wait for pod list to return data ...
	I0923 10:23:24.443060   11896 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:23:24.445709   11896 default_sa.go:45] found service account: "default"
	I0923 10:23:24.445725   11896 default_sa.go:55] duration metric: took 2.659813ms for default service account to be created ...
	I0923 10:23:24.445731   11896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:23:24.486762   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.493551   11896 system_pods.go:86] 17 kube-system pods found
	I0923 10:23:24.493583   11896 system_pods.go:89] "coredns-7c65d6cfc9-7mfbw" [04d690db-b3f4-4949-ba3f-7bd3a74f4eb6] Running
	I0923 10:23:24.493595   11896 system_pods.go:89] "csi-hostpath-attacher-0" [215bba0a-54bf-45ec-a6cd-92f89ad62dac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:23:24.493604   11896 system_pods.go:89] "csi-hostpath-resizer-0" [651d7af5-c66c-4a47-a274-97f99744e66e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:23:24.493618   11896 system_pods.go:89] "csi-hostpathplugin-8mdng" [e1e36834-e18e-4390-bb18-a360cde6394c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:23:24.493625   11896 system_pods.go:89] "etcd-addons-230451" [0e8cdf9c-cbce-459d-be1e-613c2a79cb79] Running
	I0923 10:23:24.493633   11896 system_pods.go:89] "kube-apiserver-addons-230451" [7916049b-c9ce-4de7-a7bc-4faa37c8ee80] Running
	I0923 10:23:24.493642   11896 system_pods.go:89] "kube-controller-manager-addons-230451" [68366320-29aa-47d0-a8d1-64cf99d3c206] Running
	I0923 10:23:24.493650   11896 system_pods.go:89] "kube-ingress-dns-minikube" [c962d61b-b651-40b4-b128-49b4f1966a46] Running
	I0923 10:23:24.493658   11896 system_pods.go:89] "kube-proxy-2f5tn" [ecde87e2-ab31-4b8b-9c74-67efa7870d45] Running
	I0923 10:23:24.493666   11896 system_pods.go:89] "kube-scheduler-addons-230451" [faeada60-3597-4fa5-bf52-c211a79bad29] Running
	I0923 10:23:24.493677   11896 system_pods.go:89] "metrics-server-84c5f94fbc-vx2z2" [e950a717-9855-4b25-82a8-ac71b9a3a180] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:23:24.493685   11896 system_pods.go:89] "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
	I0923 10:23:24.493693   11896 system_pods.go:89] "registry-66c9cd494c-7z2xv" [71f47a69-a374-4586-8d8b-0ec84aeee203] Running
	I0923 10:23:24.493704   11896 system_pods.go:89] "registry-proxy-kwn7c" [fab26ceb-8538-4146-9f14-955f715b3dd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:23:24.493716   11896 system_pods.go:89] "snapshot-controller-56fcc65765-mtclj" [4d040c25-f747-448f-81e3-46dd810a9b80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493727   11896 system_pods.go:89] "snapshot-controller-56fcc65765-zc5h7" [a8f9592b-9ae4-4ef5-aaeb-a421f92692bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:23:24.493735   11896 system_pods.go:89] "storage-provisioner" [c2bd96dc-bf5a-4a77-83f4-de923c76367f] Running
	I0923 10:23:24.493746   11896 system_pods.go:126] duration metric: took 48.009337ms to wait for k8s-apps to be running ...
	I0923 10:23:24.493758   11896 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:23:24.493809   11896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:23:24.513529   11896 system_svc.go:56] duration metric: took 19.75998ms WaitForService to wait for kubelet
	I0923 10:23:24.513564   11896 kubeadm.go:582] duration metric: took 45.379276732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:23:24.513588   11896 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:23:24.686932   11896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:23:24.686965   11896 node_conditions.go:123] node cpu capacity is 2
	I0923 10:23:24.686977   11896 node_conditions.go:105] duration metric: took 173.384337ms to run NodePressure ...
	I0923 10:23:24.686989   11896 start.go:241] waiting for startup goroutines ...
	I0923 10:23:24.740644   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.819562   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.820700   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.987200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.241300   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.343424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.343684   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.488088   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.740686   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.823744   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.824711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.986603   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.245648   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.319158   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.320408   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.486134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.741656   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.818867   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.820585   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.986548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.240557   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.319023   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.320864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.486855   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.740443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.820340   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.985688   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.240798   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.319348   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.320307   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.485922   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.740883   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.819269   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.821099   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.986140   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.241577   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.319821   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.320555   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.485837   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.739828   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.819216   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.820683   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.986090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.240500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.318390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.320276   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.485561   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.740036   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.819427   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.820954   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.986481   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.242825   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.319201   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.321609   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.486421   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.740721   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.820745   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.821165   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.987716   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.240042   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.320623   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.320636   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.487536   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.740655   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.819092   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.820745   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.240919   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.319548   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.321128   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.486183   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.740178   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.818613   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.830934   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.234483   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.240705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.318188   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.321549   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.486252   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.741090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.818534   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.820864   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.986959   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.241200   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.318668   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.320010   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.487738   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.740755   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.846303   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.847461   11896 kapi.go:107] duration metric: took 48.532653767s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:23:35.986432   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.240073   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.320490   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.486975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.740607   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.821390   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.985931   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.240868   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.320823   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.486628   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.740321   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.819943   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.320406   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.485374   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.740067   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.821158   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.985749   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.241435   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.320711   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.487179   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.740799   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.820591   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.987098   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.239842   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.321547   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.485975   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.740732   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.821115   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.985768   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.240307   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.320076   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.486615   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.739979   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.820446   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.985972   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.240670   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.320827   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.486416   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.740430   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.821019   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.986853   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.240848   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.320450   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.487018   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.740754   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.841792   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.986488   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.240295   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.320589   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.485911   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.741445   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.820755   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.987203   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.243595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.320568   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.490033   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.740061   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.821180   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.988792   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.240043   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.320715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.487369   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.740245   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.819995   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.986874   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.243429   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.345068   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.489391   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.740015   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.820624   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.992212   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.241134   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.323440   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.486090   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.740606   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.820802   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.991332   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.240530   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.417715   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.487512   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.742506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.820524   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.986559   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.239803   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.320349   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.486994   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.741224   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.821593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.986425   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.240567   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.320321   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.486405   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.740877   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.820749   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.986484   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.240827   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.320722   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.487461   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.740499   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.841584   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.986500   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.241311   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.324855   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.487424   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.740118   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.824677   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.985851   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.240751   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.320803   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.487062   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.740218   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.831563   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.987830   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.240818   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.332865   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.501106   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.740363   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.822929   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.990443   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.241141   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.806895   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.807674   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.808159   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.820644   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.986084   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.241298   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.327433   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.487016   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.740517   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.820018   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.986945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.240591   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.321016   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.487366   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.740865   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.820699   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.985850   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.479008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.479029   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.489051   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.741335   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.842531   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.986871   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.240003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.320593   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.487659   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.739808   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.824778   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.986705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.241008   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.320728   11896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.486320   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.742003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.820606   11896 kapi.go:107] duration metric: took 1m14.504617876s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:01.986382   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.240173   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.510479   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.759085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.989516   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.240478   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.486506   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.739595   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.987737   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.240394   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.485945   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.740361   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.987426   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.241017   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.486902   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.740789   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.986398   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.240422   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.488497   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.740174   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.986390   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.239997   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.486563   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.740856   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.985705   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.239980   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.487157   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.740726   11896 kapi.go:107] duration metric: took 1m18.504006563s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:08.742218   11896 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-230451 cluster.
	I0923 10:24:08.743548   11896 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:08.744742   11896 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:08.986003   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.487085   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.986761   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.486537   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.996063   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.487998   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.986105   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.489482   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.986286   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.531021   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.985832   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.486937   11896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.988956   11896 kapi.go:107] duration metric: took 1m27.0074062s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:14.990655   11896 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 10:24:14.991930   11896 addons.go:510] duration metric: took 1m35.857607898s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass metrics-server inspektor-gadget storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 10:24:14.991968   11896 start.go:246] waiting for cluster config update ...
	I0923 10:24:14.991993   11896 start.go:255] writing updated cluster config ...
	I0923 10:24:14.992266   11896 ssh_runner.go:195] Run: rm -f paused
	I0923 10:24:15.042846   11896 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:24:15.044785   11896 out.go:177] * Done! kubectl is now configured to use "addons-230451" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.215743113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=464ab437-9d20-41a5-9cf7-57d07d7ebad9 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.216769474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4445d05-a9e3-4ed7-b9d8-75422f184e6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.218142281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087849218116088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4445d05-a9e3-4ed7-b9d8-75422f184e6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.218805622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=844e72f5-f77b-48ff-8900-6b5c440bf1b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.218877743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=844e72f5-f77b-48ff-8900-6b5c440bf1b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.219134448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963
270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=844e72f5-f77b-48ff-8900-6b5c440bf1b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.254079584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fffb689-65df-4a3f-a9b5-5e7edb433aa4 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.254149603Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fffb689-65df-4a3f-a9b5-5e7edb433aa4 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.255256730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1fdbcf4-9b52-416d-8c1d-a6dafac2dd37 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.256376554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087849256347527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1fdbcf4-9b52-416d-8c1d-a6dafac2dd37 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.256874263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c26b6e06-57fc-4b61-923c-d78817c96292 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.256948140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c26b6e06-57fc-4b61-923c-d78817c96292 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.257295500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963
270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c26b6e06-57fc-4b61-923c-d78817c96292 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.295170975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=782c262c-b894-4412-86a2-45d914ab728f name=/runtime.v1.RuntimeService/Version
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.295395853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=782c262c-b894-4412-86a2-45d914ab728f name=/runtime.v1.RuntimeService/Version
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.296942623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=019df681-74c0-4a6f-a0f3-db3f93d03a7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.298102746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087849298077300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=019df681-74c0-4a6f-a0f3-db3f93d03a7a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.298937750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10c8fbb8-737f-43d9-82e0-79f56dd5515a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.298993211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10c8fbb8-737f-43d9-82e0-79f56dd5515a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.299225067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963
270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10c8fbb8-737f-43d9-82e0-79f56dd5515a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.320542344Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cf4b642b-825f-48d2-98c3-1b355459e36e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.320814909Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-trsjs,Uid:144a678c-016e-44a9-82ac-25f14e9771c8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727087756285216880,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:35:55.973175588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&PodSandboxMetadata{Name:nginx,Uid:5b95300c-41ad-4e8f-8edb-9269b715bfdc,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1727087613066495568,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:33:32.752868107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2ed23227db957f8fc7b932a63224056dbcd0b46d9e52c180f49f6d49878f0d7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7195e8e7-df5f-4972-ac47-55b4552c6aba,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727087055671164910,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7195e8e7-df5f-4972-ac47-55b4552c6aba,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:24:15.351942602Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7accadc36938115bad
09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-r2dxj,Uid:0c387b0a-745d-45ec-9b40-90e0be48f019,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727087034052098079,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:22:50.022180853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c2bd96dc-bf5a-4a77-83f4-de923c76367f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086964581970483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner
,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T10:22:44.266790607Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-vx2z2,Uid:e950a717-9855-4b25-82a8-ac71b9a3a180,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086963969234417,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:22:43.644527584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&PodSandboxMetadata{Name:kube-proxy-2f5tn,Uid:ecde87e2-ab31-4b8b-9c74-67efa7870d45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086960589103791,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:22:38.778223746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7mfbw,Uid:04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086960133651883,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:22:39.215026279Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodS
andbox{Id:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&PodSandboxMetadata{Name:etcd-addons-230451,Uid:319541069575dc2904a77d1523b9e738,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086947724884736,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.142:2379,kubernetes.io/config.hash: 319541069575dc2904a77d1523b9e738,kubernetes.io/config.seen: 2024-09-23T10:22:27.238705272Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-230451,Uid:3da2f0be1013d68fc6143c532893824c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17270
86947719968964,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3da2f0be1013d68fc6143c532893824c,kubernetes.io/config.seen: 2024-09-23T10:22:27.238710203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-230451,Uid:a2cce755653da329400b5f18f34e133d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086947714435597,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,tier: control-plane,},Annotations
:map[string]string{kubernetes.io/config.hash: a2cce755653da329400b5f18f34e133d,kubernetes.io/config.seen: 2024-09-23T10:22:27.238711022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-230451,Uid:5e05fb56ce3d3bcb3df5638c4e8cb3ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727086947709281842,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.142:8443,kubernetes.io/config.hash: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,kubernetes.io/config.seen: 2024-09-23T10:22:27.238709105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector
/interceptors.go:74" id=cf4b642b-825f-48d2-98c3-1b355459e36e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.321891672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e14c5ec7-6776-449a-bb96-d485f2c99fc6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.321968921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e14c5ec7-6776-449a-bb96-d485f2c99fc6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:37:29 addons-230451 crio[662]: time="2024-09-23 10:37:29.322227434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fabf94d10ff5910cdf91b9c74e38182768d3c0d979640e2a7b368d8426e419f,PodSandboxId:8c51891f1ece5e33d0adb82454e14ad83e27713b0dac8395c21254ab4b74b48c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727087759156985347,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-trsjs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 144a678c-016e-44a9-82ac-25f14e9771c8,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7f36927a761c0252d6fb76a287d0becb9333ae1b3551c560e89951871b454e,PodSandboxId:d5acbfd4821f0758fd528de7e2df786cc8a40fa623363495fefad12d58788eeb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727087617216010464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b95300c-41ad-4e8f-8edb-9269b715bfdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b,PodSandboxId:7accadc36938115bad09bd217ea66002e814267d23fd28285beb34bd5e0ee1f8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727087048431050901,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-r2dxj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0c387b0a-745d-45ec-9b40-90e0be48f019,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992df9568fa604331e730fefe25c74e8ca47bbc7a4a322042af5d0ea01b1eb95,PodSandboxId:9b9a78bf3e3fb7d53f5654cbb5b4f38ee8ee2a32f49e4dc5b619f688273e8db3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727087000210496909,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-vx2z2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e950a717-9855-4b25-82a8-ac71b9a3a180,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024,PodSandboxId:8f190e871173025fc87c99939a26b9bf17e4ee94acfaecd17d11636ab2e05c95,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727086965678846888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd96dc-bf5a-4a77-83f4-de923c76367f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131,PodSandboxId:248e92b5f56804a3bb72e43ca0237e37bc186cac14a212a8910b36979021ddbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727086963
270117679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7mfbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d690db-b3f4-4949-ba3f-7bd3a74f4eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a,PodSandboxId:11212750411bfd0906a06bc69885eb608ea7503c1877d0312579f8ff09a0b3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727086961256751701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2f5tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecde87e2-ab31-4b8b-9c74-67efa7870d45,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe,PodSandboxId:45cd3db2a1e7a9e6540d43fbfa2140bb716bbc742893311eefa3264413e5a5f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727086948651063654,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2cce755653da329400b5f18f34e133d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780,PodSandboxId:5a2773265dbdcc54bde5afab8048506b4632f98bcf9c113edca306390a2c7316,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727086948645284634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3da2f0be1013d68fc6143c532893824c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb,PodSandboxId:48d959ccb4da3ac27bfb9d155b3a948feb95c2e906b3037f2dde4e796be6d029,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727086948596912957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 319541069575dc2904a77d1523b9e738,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e,PodSandboxId:35551829a0c356ad94640d836e84f5f3fa53f193a4ffdd6eb35b7195ee3ed65e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727086948324936618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-230451,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e05fb56ce3d3bcb3df5638c4e8cb3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e14c5ec7-6776-449a-bb96-d485f2c99fc6 name=/runtime.v1.RuntimeService/ListContainers
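The CRI-O debug entries above are the runtime's replies to standard CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers, RuntimeService/ListPodSandbox). As a rough sketch only (these crictl invocations are illustrative assumptions, not part of the test harness), the same endpoints can be queried by hand from the node:

	$ minikube -p addons-230451 ssh
	$ sudo crictl version        # RuntimeService/Version
	$ sudo crictl imagefsinfo    # ImageService/ImageFsInfo
	$ sudo crictl ps -a          # RuntimeService/ListContainers (all states)
	$ sudo crictl pods           # RuntimeService/ListPodSandbox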
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0fabf94d10ff5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   8c51891f1ece5       hello-world-app-55bf9c44b4-trsjs
	5c7f36927a761       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         3 minutes ago        Running             nginx                     0                   d5acbfd4821f0       nginx
	63f8091f52d77       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   7accadc369381       gcp-auth-89d5ffd79-r2dxj
	992df9568fa60       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   9b9a78bf3e3fb       metrics-server-84c5f94fbc-vx2z2
	48b883a7cf210       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   8f190e8711730       storage-provisioner
	6fed682ab380f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   248e92b5f5680       coredns-7c65d6cfc9-7mfbw
	6238ede2ce75e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago       Running             kube-proxy                0                   11212750411bf       kube-proxy-2f5tn
	9b030424709a2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   45cd3db2a1e7a       kube-scheduler-addons-230451
	e428589b0fa5f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   5a2773265dbdc       kube-controller-manager-addons-230451
	455a0db0cbf9d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   48d959ccb4da3       etcd-addons-230451
	853b9960a36de       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   35551829a0c35       kube-apiserver-addons-230451
	
	
	==> coredns [6fed682ab380f1436efe7946bc1a85cc07c03cc60acd8ac371b5b00d8a752131] <==
	[INFO] 127.0.0.1:53719 - 30820 "HINFO IN 6685210372362929190.536412389867895458. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01361851s
	[INFO] 10.244.0.8:57781 - 24672 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0003346s
	[INFO] 10.244.0.8:57781 - 61805 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149843s
	[INFO] 10.244.0.8:51455 - 24269 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117247s
	[INFO] 10.244.0.8:51455 - 30147 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132017s
	[INFO] 10.244.0.8:49756 - 27783 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008366s
	[INFO] 10.244.0.8:49756 - 27013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096337s
	[INFO] 10.244.0.8:57401 - 50559 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099583s
	[INFO] 10.244.0.8:57401 - 121 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163833s
	[INFO] 10.244.0.8:41582 - 43809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171459s
	[INFO] 10.244.0.8:41582 - 3879 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206793s
	[INFO] 10.244.0.8:34747 - 26460 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006276s
	[INFO] 10.244.0.8:34747 - 25950 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029536s
	[INFO] 10.244.0.8:42596 - 15504 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050529s
	[INFO] 10.244.0.8:42596 - 29358 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049956s
	[INFO] 10.244.0.8:46828 - 21289 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081739s
	[INFO] 10.244.0.8:46828 - 11311 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096602s
	[INFO] 10.244.0.21:47112 - 35978 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00044167s
	[INFO] 10.244.0.21:39898 - 22255 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008491s
	[INFO] 10.244.0.21:43466 - 53222 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131557s
	[INFO] 10.244.0.21:52335 - 61823 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159688s
	[INFO] 10.244.0.21:42381 - 33204 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118433s
	[INFO] 10.244.0.21:51980 - 28250 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104154s
	[INFO] 10.244.0.21:37226 - 50868 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00097457s
	[INFO] 10.244.0.21:35684 - 29625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000645401s
	
	
	==> describe nodes <==
	Name:               addons-230451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-230451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-230451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-230451
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-230451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:36:10 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:36:10 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:36:10 +0000   Mon, 23 Sep 2024 10:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:36:10 +0000   Mon, 23 Sep 2024 10:22:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    addons-230451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 610d00e132ff4d0bb3d2f3caf1b3d48a
	  System UUID:                610d00e1-32ff-4d0b-b3d2-f3caf1b3d48a
	  Boot ID:                    ccc8674b-e396-46a3-bf38-22f6c0d79432
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-trsjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  gcp-auth                    gcp-auth-89d5ffd79-r2dxj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-7mfbw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-230451                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-230451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-230451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2f5tn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-230451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-vx2z2          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-230451 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-230451 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-230451 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-230451 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-230451 event: Registered Node addons-230451 in Controller
	
	
	==> dmesg <==
	[Sep23 10:23] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.997386] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.219809] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.523154] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.175400] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.134104] kauditd_printk_skb: 71 callbacks suppressed
	[Sep23 10:24] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.640337] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.746008] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.771381] kauditd_printk_skb: 45 callbacks suppressed
	[Sep23 10:25] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:27] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:29] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:32] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.410642] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.215645] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.744354] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.359012] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 10:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.799993] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.083276] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.110104] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.862454] kauditd_printk_skb: 37 callbacks suppressed
	[Sep23 10:35] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 10:36] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [455a0db0cbf9d938c7a2d50a0cca911ffbd5a2ce28176c31e7c753f3b1921adb] <==
	{"level":"info","ts":"2024-09-23T10:23:56.789725Z","caller":"traceutil/trace.go:171","msg":"trace[2052105943] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1046; }","duration":"386.955803ms","start":"2024-09-23T10:23:56.402762Z","end":"2024-09-23T10:23:56.789718Z","steps":["trace[2052105943] 'range keys from in-memory index tree'  (duration: 386.853751ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.402719Z","time spent":"387.021008ms","remote":"127.0.0.1:56784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-23T10:23:56.789891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.104712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:56.789926Z","caller":"traceutil/trace.go:171","msg":"trace[1887252976] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1046; }","duration":"316.139111ms","start":"2024-09-23T10:23:56.473782Z","end":"2024-09-23T10:23:56.789921Z","steps":["trace[1887252976] 'range keys from in-memory index tree'  (duration: 316.059373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.789943Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.473634Z","time spent":"316.304062ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-23T10:23:56.790488Z","caller":"traceutil/trace.go:171","msg":"trace[1993101087] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"300.658273ms","start":"2024-09-23T10:23:56.489821Z","end":"2024-09-23T10:23:56.790480Z","steps":["trace[1993101087] 'process raft request'  (duration: 297.906276ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:56.790623Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:56.489805Z","time spent":"300.723172ms","remote":"127.0.0.1:57094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:790 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2024-09-23T10:23:59.461550Z","caller":"traceutil/trace.go:171","msg":"trace[1713246877] linearizableReadLoop","detail":"{readStateIndex:1094; appliedIndex:1093; }","duration":"232.90659ms","start":"2024-09-23T10:23:59.228626Z","end":"2024-09-23T10:23:59.461533Z","steps":["trace[1713246877] 'read index received'  (duration: 231.853253ms)","trace[1713246877] 'applied index is now lower than readState.Index'  (duration: 1.052836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:23:59.461773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.14172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.461821Z","caller":"traceutil/trace.go:171","msg":"trace[1810414376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"233.215712ms","start":"2024-09-23T10:23:59.228599Z","end":"2024-09-23T10:23:59.461815Z","steps":["trace[1810414376] 'agreement among raft nodes before linearized reading'  (duration: 233.094125ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:23:59.461702Z","caller":"traceutil/trace.go:171","msg":"trace[1566092567] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"351.447386ms","start":"2024-09-23T10:23:59.110237Z","end":"2024-09-23T10:23:59.461684Z","steps":["trace[1566092567] 'process raft request'  (duration: 350.997358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.462122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.656543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:23:59.462168Z","caller":"traceutil/trace.go:171","msg":"trace[1196861560] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1060; }","duration":"154.708489ms","start":"2024-09-23T10:23:59.307453Z","end":"2024-09-23T10:23:59.462162Z","steps":["trace[1196861560] 'agreement among raft nodes before linearized reading'  (duration: 154.640705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:23:59.463122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:23:59.110202Z","time spent":"351.753223ms","remote":"127.0.0.1:56906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" mod_revision:1051 > success:<request_put:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" value_size:628 lease:839800514810162161 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-b2v2k.17f7d882804e921b\" > >"}
	{"level":"info","ts":"2024-09-23T10:24:21.903648Z","caller":"traceutil/trace.go:171","msg":"trace[1089261884] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"329.698815ms","start":"2024-09-23T10:24:21.573933Z","end":"2024-09-23T10:24:21.903631Z","steps":["trace[1089261884] 'process raft request'  (duration: 329.594188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:24:21.903769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:24:21.573911Z","time spent":"329.789617ms","remote":"127.0.0.1:56998","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1190 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-23T10:32:22.866527Z","caller":"traceutil/trace.go:171","msg":"trace[1341451039] transaction","detail":"{read_only:false; response_revision:1943; number_of_response:1; }","duration":"135.103828ms","start":"2024-09-23T10:32:22.731398Z","end":"2024-09-23T10:32:22.866501Z","steps":["trace[1341451039] 'process raft request'  (duration: 134.961155ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:29.856569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-09-23T10:32:29.884999Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1510,"took":"27.805671ms","hash":3200741289,"current-db-size-bytes":6541312,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3637248,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:32:29.885056Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3200741289,"revision":1510,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:32:55.602366Z","caller":"traceutil/trace.go:171","msg":"trace[225191809] linearizableReadLoop","detail":"{readStateIndex:2316; appliedIndex:2315; }","duration":"126.227212ms","start":"2024-09-23T10:32:55.476064Z","end":"2024-09-23T10:32:55.602291Z","steps":["trace[225191809] 'read index received'  (duration: 126.065779ms)","trace[225191809] 'applied index is now lower than readState.Index'  (duration: 161.03µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:32:55.602562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.447391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:32:55.602588Z","caller":"traceutil/trace.go:171","msg":"trace[894733726] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2160; }","duration":"126.522421ms","start":"2024-09-23T10:32:55.476060Z","end":"2024-09-23T10:32:55.602582Z","steps":["trace[894733726] 'agreement among raft nodes before linearized reading'  (duration: 126.428208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:55.602743Z","caller":"traceutil/trace.go:171","msg":"trace[43643442] transaction","detail":"{read_only:false; response_revision:2160; number_of_response:1; }","duration":"129.84545ms","start":"2024-09-23T10:32:55.472891Z","end":"2024-09-23T10:32:55.602737Z","steps":["trace[43643442] 'process raft request'  (duration: 129.312421ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:33:00.762090Z","caller":"traceutil/trace.go:171","msg":"trace[648338775] transaction","detail":"{read_only:false; response_revision:2169; number_of_response:1; }","duration":"288.031158ms","start":"2024-09-23T10:33:00.473384Z","end":"2024-09-23T10:33:00.761415Z","steps":["trace[648338775] 'process raft request'  (duration: 287.71469ms)"],"step_count":1}
	
	
	==> gcp-auth [63f8091f52d77f9537c8f927fc608b30d092bc94b4cf6eba27a3bfd22e87d66b] <==
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:24:15 Ready to marshal response ...
	2024/09/23 10:24:15 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:18 Ready to marshal response ...
	2024/09/23 10:32:18 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:25 Ready to marshal response ...
	2024/09/23 10:32:25 Ready to write response ...
	2024/09/23 10:32:29 Ready to marshal response ...
	2024/09/23 10:32:29 Ready to write response ...
	2024/09/23 10:32:37 Ready to marshal response ...
	2024/09/23 10:32:37 Ready to write response ...
	2024/09/23 10:32:53 Ready to marshal response ...
	2024/09/23 10:32:53 Ready to write response ...
	2024/09/23 10:33:28 Ready to marshal response ...
	2024/09/23 10:33:28 Ready to write response ...
	2024/09/23 10:33:32 Ready to marshal response ...
	2024/09/23 10:33:32 Ready to write response ...
	2024/09/23 10:35:55 Ready to marshal response ...
	2024/09/23 10:35:55 Ready to write response ...
	
	
	==> kernel <==
	 10:37:29 up 15 min,  0 users,  load average: 0.70, 0.73, 0.58
	Linux addons-230451 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [853b9960a36dec977f435ebb513f64b6716f67a149abdba0958b01381df65f6e] <==
	E0923 10:24:23.987293       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	E0923 10:24:23.993204       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.69.103:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.69.103:443: connect: connection refused" logger="UnhandledError"
	I0923 10:24:24.062155       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:32:18.858064       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.199.8"}
	E0923 10:32:53.563750       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:33:08.344205       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:33:27.592624       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:28.618746       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:32.583253       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:32.794255       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.115.172"}
	I0923 10:33:45.762961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.763069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.780962       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.781083       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.808036       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.808930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.811049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.811683       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:33:45.937816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:33:45.937953       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:33:46.808898       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:33:46.938287       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:33:46.947106       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0923 10:35:56.152469       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.183.72"}
	E0923 10:35:58.637129       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [e428589b0fa5fb2bd70aacbad0c33a1e6d60cc0fa5f13384ce5ccd86c04de780] <==
	I0923 10:35:55.993894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.103µs"
	W0923 10:35:56.925547       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:56.925605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:35:58.554165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.503µs"
	I0923 10:35:58.556798       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 10:35:58.565977       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 10:35:59.977494       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.791789ms"
	I0923 10:35:59.978634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="97.99µs"
	I0923 10:36:08.659677       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0923 10:36:10.771517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-230451"
	W0923 10:36:12.522935       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:12.522999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:36:20.937773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:20.937811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:36:41.430712       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:41.430999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:36:42.332940       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:42.333050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:03.754951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:03.755021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:19.761131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:19.761190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:22.839422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:22.839539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:37:28.295862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.727µs"
	
	
	==> kube-proxy [6238ede2ce75e1973f2db001e826f5bdc935c841307ead8c4e2ae95e6e780e8a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:22:43.920909       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:22:44.021992       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.142"]
	E0923 10:22:44.022096       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:45.319016       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:22:45.319081       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:22:45.319124       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:45.327775       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:45.328048       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:45.328078       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:45.345796       1 config.go:199] "Starting service config controller"
	I0923 10:22:45.345835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:45.345866       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:45.345870       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:45.350777       1 config.go:328] "Starting node config controller"
	I0923 10:22:45.350807       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:45.446542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:22:45.446598       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:45.450897       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b030424709a2f592644ab0fd055041f3130302d02f62d73a3b292d4d3d95cfe] <==
	W0923 10:22:31.294807       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:31.294862       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:22:32.090971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:32.091289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.095004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:32.095037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.148723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.148834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.209219       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:32.209362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.290354       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.290448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.370809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.370910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.393003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:32.393122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.446838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:22:32.446961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.464976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:32.465158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.550414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:22:32.550554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:32.715850       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:22:32.715995       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 10:22:34.754020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:36:34 addons-230451 kubelet[1205]: E0923 10:36:34.067779    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087794067189496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:35 addons-230451 kubelet[1205]: I0923 10:36:35.060921    1205 scope.go:117] "RemoveContainer" containerID="e06f961e39af1729fdd20c0130d1e51ab48fd6e9a777d323d3467041d5b37ae9"
	Sep 23 10:36:35 addons-230451 kubelet[1205]: I0923 10:36:35.081022    1205 scope.go:117] "RemoveContainer" containerID="1b37183ea0c554a083aaa2975fe96fec32dfb01dac41cebceada5a484ce6b149"
	Sep 23 10:36:36 addons-230451 kubelet[1205]: E0923 10:36:36.710733    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:36:44 addons-230451 kubelet[1205]: E0923 10:36:44.071143    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087804070696786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:44 addons-230451 kubelet[1205]: E0923 10:36:44.071177    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087804070696786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:48 addons-230451 kubelet[1205]: E0923 10:36:48.711073    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:36:54 addons-230451 kubelet[1205]: E0923 10:36:54.074095    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087814073482383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:54 addons-230451 kubelet[1205]: E0923 10:36:54.074448    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087814073482383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:01 addons-230451 kubelet[1205]: E0923 10:37:01.710468    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:37:04 addons-230451 kubelet[1205]: E0923 10:37:04.076753    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087824076271773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:04 addons-230451 kubelet[1205]: E0923 10:37:04.076794    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087824076271773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:13 addons-230451 kubelet[1205]: E0923 10:37:13.711937    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:37:14 addons-230451 kubelet[1205]: E0923 10:37:14.079091    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087834078663670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:14 addons-230451 kubelet[1205]: E0923 10:37:14.079229    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087834078663670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:24 addons-230451 kubelet[1205]: E0923 10:37:24.081513    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087844081135865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:24 addons-230451 kubelet[1205]: E0923 10:37:24.081571    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087844081135865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:37:24 addons-230451 kubelet[1205]: E0923 10:37:24.710129    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7195e8e7-df5f-4972-ac47-55b4552c6aba"
	Sep 23 10:37:28 addons-230451 kubelet[1205]: I0923 10:37:28.320439    1205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-trsjs" podStartSLOduration=90.72285457 podStartE2EDuration="1m33.320416697s" podCreationTimestamp="2024-09-23 10:35:55 +0000 UTC" firstStartedPulling="2024-09-23 10:35:56.542816858 +0000 UTC m=+802.968072547" lastFinishedPulling="2024-09-23 10:35:59.140378983 +0000 UTC m=+805.565634674" observedRunningTime="2024-09-23 10:35:59.968864123 +0000 UTC m=+806.394119833" watchObservedRunningTime="2024-09-23 10:37:28.320416697 +0000 UTC m=+894.745672702"
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.824614    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e950a717-9855-4b25-82a8-ac71b9a3a180-tmp-dir\") pod \"e950a717-9855-4b25-82a8-ac71b9a3a180\" (UID: \"e950a717-9855-4b25-82a8-ac71b9a3a180\") "
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.824657    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snzw5\" (UniqueName: \"kubernetes.io/projected/e950a717-9855-4b25-82a8-ac71b9a3a180-kube-api-access-snzw5\") pod \"e950a717-9855-4b25-82a8-ac71b9a3a180\" (UID: \"e950a717-9855-4b25-82a8-ac71b9a3a180\") "
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.826183    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e950a717-9855-4b25-82a8-ac71b9a3a180-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "e950a717-9855-4b25-82a8-ac71b9a3a180" (UID: "e950a717-9855-4b25-82a8-ac71b9a3a180"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.828362    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e950a717-9855-4b25-82a8-ac71b9a3a180-kube-api-access-snzw5" (OuterVolumeSpecName: "kube-api-access-snzw5") pod "e950a717-9855-4b25-82a8-ac71b9a3a180" (UID: "e950a717-9855-4b25-82a8-ac71b9a3a180"). InnerVolumeSpecName "kube-api-access-snzw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.925599    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-snzw5\" (UniqueName: \"kubernetes.io/projected/e950a717-9855-4b25-82a8-ac71b9a3a180-kube-api-access-snzw5\") on node \"addons-230451\" DevicePath \"\""
	Sep 23 10:37:29 addons-230451 kubelet[1205]: I0923 10:37:29.925627    1205 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e950a717-9855-4b25-82a8-ac71b9a3a180-tmp-dir\") on node \"addons-230451\" DevicePath \"\""
	
	
	==> storage-provisioner [48b883a7cf210972dd23f723a6d33de69f215cfc68abb1a15da065bb89673024] <==
	I0923 10:22:46.156565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:22:46.196845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:22:46.202503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:22:46.219408       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:22:46.219529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	I0923 10:22:46.220596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfe369ce-2e58-4a81-9323-18883c63569e", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9 became leader
	I0923 10:22:46.321402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-230451_2e80d987-c1b1-4690-b53d-d504d098e6e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-230451 -n addons-230451
helpers_test.go:261: (dbg) Run:  kubectl --context addons-230451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-230451 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-230451 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-230451/192.168.39.142
	Start Time:       Mon, 23 Sep 2024 10:24:15 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ctzjs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ctzjs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/busybox to addons-230451
	  Normal   Pulling    11m (x4 over 13m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)    kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m2s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (300.49s)
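
The post-mortem above shows the leftover busybox pod stuck in ImagePullBackOff with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed". A minimal triage sketch (these commands are illustrative and were not part of the recorded test run; they assume shell access to the CI host) is to pull the image directly on the node with crictl, which separates a registry/credential problem from anything specific to the addons under test:

	# hypothetical triage commands, not taken from the log above
	$ out/minikube-linux-amd64 -p addons-230451 ssh
	$ sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# an auth error here as well points at the registry/credentials,
	# not at the metrics-server or registry addons being exercised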

                                                
                                    
TestFunctional/parallel/MySQL (602.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-870347 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-p9xlv" [c078100b-e569-47dd-8ea7-42af06ad116e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-870347 -n functional-870347
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-23 10:51:20.621314035 +0000 UTC m=+1816.690632612
functional_test.go:1799: (dbg) Run:  kubectl --context functional-870347 describe po mysql-6cdb49bbb-p9xlv -n default
functional_test.go:1799: (dbg) kubectl --context functional-870347 describe po mysql-6cdb49bbb-p9xlv -n default:
Name:             mysql-6cdb49bbb-p9xlv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-870347/192.168.39.190
Start Time:       Mon, 23 Sep 2024 10:41:20 +0000
Labels:           app=mysql
                  pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-6cdb49bbb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbjw6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tbjw6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-p9xlv to functional-870347
functional_test.go:1799: (dbg) Run:  kubectl --context functional-870347 logs mysql-6cdb49bbb-p9xlv -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-870347 logs mysql-6cdb49bbb-p9xlv -n default: exit status 1 (72.685848ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-p9xlv" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-870347 logs mysql-6cdb49bbb-p9xlv -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
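
The mysql pod spent the entire 10m0s window in ContainerCreating, and its only recorded event is the initial Scheduled, which typically means the docker.io/mysql:5.7 pull or sandbox setup never completed. A hedged follow-up sketch, using only standard kubectl/crictl calls (illustrative commands that were not run as part of the test):

	# hypothetical follow-up commands, not taken from the log above
	$ kubectl --context functional-870347 get events -n default \
	    --field-selector involvedObject.name=mysql-6cdb49bbb-p9xlv
	$ out/minikube-linux-amd64 -p functional-870347 ssh
	$ sudo crictl images | grep mysql   # is docker.io/mysql:5.7 present on the node yet?
	$ sudo crictl ps -a | grep mysql    # has a mysql container been created at all?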
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-870347 -n functional-870347
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 logs -n 25: (1.50851291s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /etc/ssl/certs/11139.pem                                                |                   |         |         |                     |                     |
	| service        | functional-870347 service                                               | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | hello-node-connect --url                                                |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /usr/share/ca-certificates/11139.pem                                    |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /etc/test/nested/copy/11139/hosts                                       |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| image          | functional-870347 image ls                                              | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /etc/ssl/certs/111392.pem                                               |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /usr/share/ca-certificates/111392.pem                                   |                   |         |         |                     |                     |
	| image          | functional-870347 image save kicbase/echo-server:functional-870347      | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh sudo cat                                          | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| image          | functional-870347 image rm                                              | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | kicbase/echo-server:functional-870347                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-870347 image ls                                              | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	| image          | functional-870347 image load                                            | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-870347 image ls                                              | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	| image          | functional-870347 image save --daemon                                   | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | kicbase/echo-server:functional-870347                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-870347 ssh pgrep                                             | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-870347 image build -t                                        | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | localhost/my-image:functional-870347                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-870347                                                       | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-870347 image ls                                              | functional-870347 | jenkins | v1.34.0 | 23 Sep 24 10:41 UTC | 23 Sep 24 10:41 UTC |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:40:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:40:58.955087   20772 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:58.955333   20772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.955343   20772 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:58.955349   20772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.955563   20772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:40:58.956083   20772 out.go:352] Setting JSON to false
	I0923 10:40:58.956980   20772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1402,"bootTime":1727086657,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:40:58.957078   20772 start.go:139] virtualization: kvm guest
	I0923 10:40:58.959087   20772 out.go:177] * [functional-870347] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:40:58.960617   20772 notify.go:220] Checking for updates...
	I0923 10:40:58.960680   20772 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:40:58.962123   20772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:40:58.963330   20772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:40:58.964603   20772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:40:58.965735   20772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:40:58.966843   20772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:40:58.968318   20772 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:40:58.968713   20772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.968766   20772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:58.984429   20772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0923 10:40:58.984939   20772 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:58.985610   20772 main.go:141] libmachine: Using API Version  1
	I0923 10:40:58.985628   20772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:58.985904   20772 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:58.988528   20772 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:58.988820   20772 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:40:58.989282   20772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.989347   20772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:59.005632   20772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0923 10:40:59.006009   20772 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:59.006489   20772 main.go:141] libmachine: Using API Version  1
	I0923 10:40:59.006518   20772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:59.006804   20772 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:59.007095   20772 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:59.042342   20772 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 10:40:59.043519   20772 start.go:297] selected driver: kvm2
	I0923 10:40:59.043536   20772 start.go:901] validating driver "kvm2" against &{Name:functional-870347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-870347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:59.043687   20772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:40:59.045098   20772 cni.go:84] Creating CNI manager for ""
	I0923 10:40:59.045167   20772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:40:59.045250   20772 start.go:340] cluster config:
	{Name:functional-870347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-870347 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:59.046776   20772 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.412295953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088681412265695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3f12ce6-66c3-4cea-b47c-a0edb4ec9ad0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.412754464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f28a0a7e-01a9-431f-addc-70e1c94720dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.412903554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f28a0a7e-01a9-431f-addc-70e1c94720dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.413472369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4fa705799dbac37ee4d688e10cee38da0f0d6073a3c06aa609e18363b7ac24a,PodSandboxId:846871cb907057a031c5e3d5730570f1d241c142dfa41786c412ba04bafc5c13,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727088094207923345,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5328749a-cfc1-4dda-8d9a-ced26ee5c083,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892dedf7e677481a428bd1d83ba1cf54a4273e3b687c0d8ee8a054269bd1e09a,PodSandboxId:169ebc43e302d46288ef454ded372f3423ff53ee2b79fe6523ef8cb54cbc5003,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727088085496556383,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-dbgqp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c727dfd3-ab53-48e3-8d9e-e80b4d46104b,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports:
[{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec456493ea341d16f320e3e13f78c1135e3e64bf4ae167d3afc5848fd8f16664,PodSandboxId:59d1d6437ebb3a72e2fcd8520980fcc844d85d4ac3c3fac56092d5aaaaaed9ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088074106510374,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-gc8ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7086ffda-bfe3-4d93-afcf-4d51c80b1156,},Annotations:map[string]string{io.kuber
netes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93de6e04e1a9ef0af026598667162bf94036aa63f6e427b7a716c595737b7fa4,PodSandboxId:c49b8098d67cf77bcf58cbbfa1fd4963f450c3d6e00c01d958b377a9fb9b72d1,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727088073908695275,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-nszv7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e986f31e-4968-4e8e-8
3f1-2eef2ebcd831,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1525d03f296e61064c22049104dd56ea94f5df0608bfbe96925cd9361b1b66,PodSandboxId:56fbe36ae1d05d8534ab625b2d88a5307ab70da2a2c2bc11ed6ea475e27fe1a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727088065983705161,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.
kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c674d487-4763-48bc-aac4-b820df86baed,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7af89d1538a26d1970a3908388ed334de81effe1700845c213f40ec1e14560,PodSandboxId:330835474aa04f5de836085108d7dbb6656be99bbcc92bde26cc01585fd9fab6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088062167853541,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-b
nrp8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 657df601-1bb6-4cc0-8e2f-bab433678183,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c1a0e0be51792c80eddefbf6b897c48b5ffd226e1cc724493428292288dfd4,PodSandboxId:a3eca749513c008b122dfd9ce0c963fdd66a7f84eb97337e9bbc1c458e2db6d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088032144712849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88229ec096945bc87d66e6aee04c7103b5440f1fa32d184367e8688138523d91,PodSandboxId:dfa6cdb106f57cc422652823b445e581d0e3f9ab312054bca0a9e0c3a3af3ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727088032143659634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad755cb524b1819a3d2af7aa49674d2ee60b3e8308d5947a2a45eff41de3422f,PodSandboxId:4c9a7677c900bf302b4a928b68448e39cd5831fed53b59af118b66863008ed05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088032129074327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54
d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7e3bcb45a5411a7288d6916873871be99c7a9a8735e57b9f47968aa9e46dd,PodSandboxId:5dc30c48d88dc50580d2ce25e50bf0b26ddf197700a897138c0704087e8a0e76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17270
88028461832566,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf5d091937f7e261ae04cf30b774da9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae06c18d1bf07cacde022d2f62ce81dd6190bb8792558ce2334cac074f7767,PodSandboxId:cdd35295e3c0d88657005a9c421167426ab2e406d6085f9019a93d9cdaba7cff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088028309681048,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d5a32228a9839e9e21750b3f5a6c02f01be0af463c9ebd8226268d8124432c,PodSandboxId:ecf87791bc88633dbf4213b1967bde2b39367d356a06e1f488c1739fa3ce5f1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088028347362087,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d831a5ae2952706a0b202abb91a8499d5e1c2ddd165475f28a91c53341c00a38,PodSandboxId:5480e44fcb6470fbc9a7d40696866e097334668ea6f19f3ed7452679fd1508ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088028333001367,Labels:map[string]string{io
.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1818e8ced4ac95b3b06008fd10492e413c274475e6c082e31c45fdcf734bf636,PodSandboxId:06c67af040a6d73a44b72ccee85ed508a1c041f5fde2e20db5364f88e37d1050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727087989543616811,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a082847276b61e5d653f39a9222198975faccf1f7adf8efac176bacead8a9ede,PodSandboxId:736b2651e9e25b8dabb03ffa0c46194ef08e2345b29d13ac8b40679dc1a41b78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727087989258930032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fc13fd1746ac5718fd662dc0b65d716c40a0e3c48b346602fe3818bd822e82,PodSandboxId:a3048fe319722c3c593f20f5ad437f1d95e0833c4d3d89a7dc54ec51084c5cf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727087989190288980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb50ba39a2ec1ba8f04f69f0943e085d9a06ccb50c3e72bcd20743804ce804,PodSandboxId:6c5e2395f383746441f2676b2a3c0546a561197966b0fe7bc98d98b435a88361,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727087985464503006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb792513e21ec026c32fa3c8ac4f7cedc3a0e2b64d0ff8cb1377cef145f80b30,PodSandboxId:bc93a8bdf9299d9ec73c7085a47a52a9cf2382e9dfb74b08885d7596d049981c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471
a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727087985409671091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6896a210a1e859cc0afbf4104ea38b9186b6b4fbd8950778899ea80d6723989,PodSandboxId:5d4b73bb2638fe7a65fc74818622c5e968aadd89491cce61b9c9098686307430,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbb
e954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727087985384979949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f28a0a7e-01a9-431f-addc-70e1c94720dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.460466664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fdc8e74-1f41-466f-aff6-a07311e3efcd name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.460591785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fdc8e74-1f41-466f-aff6-a07311e3efcd name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.461971976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7054cbb4-95fd-4511-b608-f9d3b07f8cc6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.462674333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088681462633506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7054cbb4-95fd-4511-b608-f9d3b07f8cc6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.463519394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9db3e56-b9e1-4312-80d5-97456385de17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.463624457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9db3e56-b9e1-4312-80d5-97456385de17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.464046950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4fa705799dbac37ee4d688e10cee38da0f0d6073a3c06aa609e18363b7ac24a,PodSandboxId:846871cb907057a031c5e3d5730570f1d241c142dfa41786c412ba04bafc5c13,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727088094207923345,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5328749a-cfc1-4dda-8d9a-ced26ee5c083,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892dedf7e677481a428bd1d83ba1cf54a4273e3b687c0d8ee8a054269bd1e09a,PodSandboxId:169ebc43e302d46288ef454ded372f3423ff53ee2b79fe6523ef8cb54cbc5003,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727088085496556383,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-dbgqp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c727dfd3-ab53-48e3-8d9e-e80b4d46104b,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports:
[{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec456493ea341d16f320e3e13f78c1135e3e64bf4ae167d3afc5848fd8f16664,PodSandboxId:59d1d6437ebb3a72e2fcd8520980fcc844d85d4ac3c3fac56092d5aaaaaed9ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088074106510374,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-gc8ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7086ffda-bfe3-4d93-afcf-4d51c80b1156,},Annotations:map[string]string{io.kuber
netes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93de6e04e1a9ef0af026598667162bf94036aa63f6e427b7a716c595737b7fa4,PodSandboxId:c49b8098d67cf77bcf58cbbfa1fd4963f450c3d6e00c01d958b377a9fb9b72d1,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727088073908695275,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-nszv7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e986f31e-4968-4e8e-8
3f1-2eef2ebcd831,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1525d03f296e61064c22049104dd56ea94f5df0608bfbe96925cd9361b1b66,PodSandboxId:56fbe36ae1d05d8534ab625b2d88a5307ab70da2a2c2bc11ed6ea475e27fe1a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727088065983705161,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.
kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c674d487-4763-48bc-aac4-b820df86baed,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7af89d1538a26d1970a3908388ed334de81effe1700845c213f40ec1e14560,PodSandboxId:330835474aa04f5de836085108d7dbb6656be99bbcc92bde26cc01585fd9fab6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088062167853541,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-b
nrp8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 657df601-1bb6-4cc0-8e2f-bab433678183,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c1a0e0be51792c80eddefbf6b897c48b5ffd226e1cc724493428292288dfd4,PodSandboxId:a3eca749513c008b122dfd9ce0c963fdd66a7f84eb97337e9bbc1c458e2db6d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088032144712849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88229ec096945bc87d66e6aee04c7103b5440f1fa32d184367e8688138523d91,PodSandboxId:dfa6cdb106f57cc422652823b445e581d0e3f9ab312054bca0a9e0c3a3af3ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727088032143659634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad755cb524b1819a3d2af7aa49674d2ee60b3e8308d5947a2a45eff41de3422f,PodSandboxId:4c9a7677c900bf302b4a928b68448e39cd5831fed53b59af118b66863008ed05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088032129074327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54
d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7e3bcb45a5411a7288d6916873871be99c7a9a8735e57b9f47968aa9e46dd,PodSandboxId:5dc30c48d88dc50580d2ce25e50bf0b26ddf197700a897138c0704087e8a0e76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17270
88028461832566,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf5d091937f7e261ae04cf30b774da9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae06c18d1bf07cacde022d2f62ce81dd6190bb8792558ce2334cac074f7767,PodSandboxId:cdd35295e3c0d88657005a9c421167426ab2e406d6085f9019a93d9cdaba7cff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088028309681048,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d5a32228a9839e9e21750b3f5a6c02f01be0af463c9ebd8226268d8124432c,PodSandboxId:ecf87791bc88633dbf4213b1967bde2b39367d356a06e1f488c1739fa3ce5f1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088028347362087,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d831a5ae2952706a0b202abb91a8499d5e1c2ddd165475f28a91c53341c00a38,PodSandboxId:5480e44fcb6470fbc9a7d40696866e097334668ea6f19f3ed7452679fd1508ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088028333001367,Labels:map[string]string{io
.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1818e8ced4ac95b3b06008fd10492e413c274475e6c082e31c45fdcf734bf636,PodSandboxId:06c67af040a6d73a44b72ccee85ed508a1c041f5fde2e20db5364f88e37d1050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727087989543616811,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a082847276b61e5d653f39a9222198975faccf1f7adf8efac176bacead8a9ede,PodSandboxId:736b2651e9e25b8dabb03ffa0c46194ef08e2345b29d13ac8b40679dc1a41b78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727087989258930032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fc13fd1746ac5718fd662dc0b65d716c40a0e3c48b346602fe3818bd822e82,PodSandboxId:a3048fe319722c3c593f20f5ad437f1d95e0833c4d3d89a7dc54ec51084c5cf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727087989190288980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb50ba39a2ec1ba8f04f69f0943e085d9a06ccb50c3e72bcd20743804ce804,PodSandboxId:6c5e2395f383746441f2676b2a3c0546a561197966b0fe7bc98d98b435a88361,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727087985464503006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb792513e21ec026c32fa3c8ac4f7cedc3a0e2b64d0ff8cb1377cef145f80b30,PodSandboxId:bc93a8bdf9299d9ec73c7085a47a52a9cf2382e9dfb74b08885d7596d049981c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471
a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727087985409671091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6896a210a1e859cc0afbf4104ea38b9186b6b4fbd8950778899ea80d6723989,PodSandboxId:5d4b73bb2638fe7a65fc74818622c5e968aadd89491cce61b9c9098686307430,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbb
e954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727087985384979949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9db3e56-b9e1-4312-80d5-97456385de17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.497636624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=900b897c-3b11-4c5a-8f0e-262dbda841b7 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.497711983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=900b897c-3b11-4c5a-8f0e-262dbda841b7 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.498544041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d02010e-e748-437d-b9ef-3b7e0be28ef9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.499675556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088681499650721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d02010e-e748-437d-b9ef-3b7e0be28ef9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.500680425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8321f432-bcdb-4dea-bdb8-a0b503f2e43a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.500754290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8321f432-bcdb-4dea-bdb8-a0b503f2e43a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.501261036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4fa705799dbac37ee4d688e10cee38da0f0d6073a3c06aa609e18363b7ac24a,PodSandboxId:846871cb907057a031c5e3d5730570f1d241c142dfa41786c412ba04bafc5c13,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727088094207923345,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5328749a-cfc1-4dda-8d9a-ced26ee5c083,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892dedf7e677481a428bd1d83ba1cf54a4273e3b687c0d8ee8a054269bd1e09a,PodSandboxId:169ebc43e302d46288ef454ded372f3423ff53ee2b79fe6523ef8cb54cbc5003,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727088085496556383,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-dbgqp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c727dfd3-ab53-48e3-8d9e-e80b4d46104b,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports:
[{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec456493ea341d16f320e3e13f78c1135e3e64bf4ae167d3afc5848fd8f16664,PodSandboxId:59d1d6437ebb3a72e2fcd8520980fcc844d85d4ac3c3fac56092d5aaaaaed9ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088074106510374,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-gc8ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7086ffda-bfe3-4d93-afcf-4d51c80b1156,},Annotations:map[string]string{io.kuber
netes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93de6e04e1a9ef0af026598667162bf94036aa63f6e427b7a716c595737b7fa4,PodSandboxId:c49b8098d67cf77bcf58cbbfa1fd4963f450c3d6e00c01d958b377a9fb9b72d1,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727088073908695275,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-nszv7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e986f31e-4968-4e8e-8
3f1-2eef2ebcd831,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1525d03f296e61064c22049104dd56ea94f5df0608bfbe96925cd9361b1b66,PodSandboxId:56fbe36ae1d05d8534ab625b2d88a5307ab70da2a2c2bc11ed6ea475e27fe1a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727088065983705161,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.
kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c674d487-4763-48bc-aac4-b820df86baed,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7af89d1538a26d1970a3908388ed334de81effe1700845c213f40ec1e14560,PodSandboxId:330835474aa04f5de836085108d7dbb6656be99bbcc92bde26cc01585fd9fab6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088062167853541,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-b
nrp8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 657df601-1bb6-4cc0-8e2f-bab433678183,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c1a0e0be51792c80eddefbf6b897c48b5ffd226e1cc724493428292288dfd4,PodSandboxId:a3eca749513c008b122dfd9ce0c963fdd66a7f84eb97337e9bbc1c458e2db6d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088032144712849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88229ec096945bc87d66e6aee04c7103b5440f1fa32d184367e8688138523d91,PodSandboxId:dfa6cdb106f57cc422652823b445e581d0e3f9ab312054bca0a9e0c3a3af3ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727088032143659634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad755cb524b1819a3d2af7aa49674d2ee60b3e8308d5947a2a45eff41de3422f,PodSandboxId:4c9a7677c900bf302b4a928b68448e39cd5831fed53b59af118b66863008ed05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088032129074327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54
d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7e3bcb45a5411a7288d6916873871be99c7a9a8735e57b9f47968aa9e46dd,PodSandboxId:5dc30c48d88dc50580d2ce25e50bf0b26ddf197700a897138c0704087e8a0e76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17270
88028461832566,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf5d091937f7e261ae04cf30b774da9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae06c18d1bf07cacde022d2f62ce81dd6190bb8792558ce2334cac074f7767,PodSandboxId:cdd35295e3c0d88657005a9c421167426ab2e406d6085f9019a93d9cdaba7cff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088028309681048,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d5a32228a9839e9e21750b3f5a6c02f01be0af463c9ebd8226268d8124432c,PodSandboxId:ecf87791bc88633dbf4213b1967bde2b39367d356a06e1f488c1739fa3ce5f1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088028347362087,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d831a5ae2952706a0b202abb91a8499d5e1c2ddd165475f28a91c53341c00a38,PodSandboxId:5480e44fcb6470fbc9a7d40696866e097334668ea6f19f3ed7452679fd1508ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088028333001367,Labels:map[string]string{io
.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1818e8ced4ac95b3b06008fd10492e413c274475e6c082e31c45fdcf734bf636,PodSandboxId:06c67af040a6d73a44b72ccee85ed508a1c041f5fde2e20db5364f88e37d1050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727087989543616811,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a082847276b61e5d653f39a9222198975faccf1f7adf8efac176bacead8a9ede,PodSandboxId:736b2651e9e25b8dabb03ffa0c46194ef08e2345b29d13ac8b40679dc1a41b78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727087989258930032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fc13fd1746ac5718fd662dc0b65d716c40a0e3c48b346602fe3818bd822e82,PodSandboxId:a3048fe319722c3c593f20f5ad437f1d95e0833c4d3d89a7dc54ec51084c5cf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727087989190288980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb50ba39a2ec1ba8f04f69f0943e085d9a06ccb50c3e72bcd20743804ce804,PodSandboxId:6c5e2395f383746441f2676b2a3c0546a561197966b0fe7bc98d98b435a88361,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727087985464503006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb792513e21ec026c32fa3c8ac4f7cedc3a0e2b64d0ff8cb1377cef145f80b30,PodSandboxId:bc93a8bdf9299d9ec73c7085a47a52a9cf2382e9dfb74b08885d7596d049981c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471
a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727087985409671091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6896a210a1e859cc0afbf4104ea38b9186b6b4fbd8950778899ea80d6723989,PodSandboxId:5d4b73bb2638fe7a65fc74818622c5e968aadd89491cce61b9c9098686307430,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbb
e954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727087985384979949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8321f432-bcdb-4dea-bdb8-a0b503f2e43a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.539346212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2880e17-5247-4b92-a74b-5a4189ed0a13 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.539443083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2880e17-5247-4b92-a74b-5a4189ed0a13 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.540394355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6588d76b-6703-42ff-9471-12f112b1c483 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.541214632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088681541191891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6588d76b-6703-42ff-9471-12f112b1c483 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.541589972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3073d43-eb84-41a5-b98e-7b7b7f3696ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.541668626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3073d43-eb84-41a5-b98e-7b7b7f3696ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:51:21 functional-870347 crio[4763]: time="2024-09-23 10:51:21.542092418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4fa705799dbac37ee4d688e10cee38da0f0d6073a3c06aa609e18363b7ac24a,PodSandboxId:846871cb907057a031c5e3d5730570f1d241c142dfa41786c412ba04bafc5c13,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_RUNNING,CreatedAt:1727088094207923345,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5328749a-cfc1-4dda-8d9a-ced26ee5c083,},Annotations:map[string]string{io.kubernetes.container.hash: 18495a64,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892dedf7e677481a428bd1d83ba1cf54a4273e3b687c0d8ee8a054269bd1e09a,PodSandboxId:169ebc43e302d46288ef454ded372f3423ff53ee2b79fe6523ef8cb54cbc5003,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1727088085496556383,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-dbgqp,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c727dfd3-ab53-48e3-8d9e-e80b4d46104b,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports:
[{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec456493ea341d16f320e3e13f78c1135e3e64bf4ae167d3afc5848fd8f16664,PodSandboxId:59d1d6437ebb3a72e2fcd8520980fcc844d85d4ac3c3fac56092d5aaaaaed9ae,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088074106510374,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-gc8ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7086ffda-bfe3-4d93-afcf-4d51c80b1156,},Annotations:map[string]string{io.kuber
netes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93de6e04e1a9ef0af026598667162bf94036aa63f6e427b7a716c595737b7fa4,PodSandboxId:c49b8098d67cf77bcf58cbbfa1fd4963f450c3d6e00c01d958b377a9fb9b72d1,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1727088073908695275,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-nszv7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e986f31e-4968-4e8e-8
3f1-2eef2ebcd831,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1525d03f296e61064c22049104dd56ea94f5df0608bfbe96925cd9361b1b66,PodSandboxId:56fbe36ae1d05d8534ab625b2d88a5307ab70da2a2c2bc11ed6ea475e27fe1a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1727088065983705161,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.
kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c674d487-4763-48bc-aac4-b820df86baed,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d7af89d1538a26d1970a3908388ed334de81effe1700845c213f40ec1e14560,PodSandboxId:330835474aa04f5de836085108d7dbb6656be99bbcc92bde26cc01585fd9fab6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1727088062167853541,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-b
nrp8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 657df601-1bb6-4cc0-8e2f-bab433678183,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c1a0e0be51792c80eddefbf6b897c48b5ffd226e1cc724493428292288dfd4,PodSandboxId:a3eca749513c008b122dfd9ce0c963fdd66a7f84eb97337e9bbc1c458e2db6d6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088032144712849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88229ec096945bc87d66e6aee04c7103b5440f1fa32d184367e8688138523d91,PodSandboxId:dfa6cdb106f57cc422652823b445e581d0e3f9ab312054bca0a9e0c3a3af3ee9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727088032143659634,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad755cb524b1819a3d2af7aa49674d2ee60b3e8308d5947a2a45eff41de3422f,PodSandboxId:4c9a7677c900bf302b4a928b68448e39cd5831fed53b59af118b66863008ed05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088032129074327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54
d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9a7e3bcb45a5411a7288d6916873871be99c7a9a8735e57b9f47968aa9e46dd,PodSandboxId:5dc30c48d88dc50580d2ce25e50bf0b26ddf197700a897138c0704087e8a0e76,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17270
88028461832566,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf5d091937f7e261ae04cf30b774da9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae06c18d1bf07cacde022d2f62ce81dd6190bb8792558ce2334cac074f7767,PodSandboxId:cdd35295e3c0d88657005a9c421167426ab2e406d6085f9019a93d9cdaba7cff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088028309681048,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d5a32228a9839e9e21750b3f5a6c02f01be0af463c9ebd8226268d8124432c,PodSandboxId:ecf87791bc88633dbf4213b1967bde2b39367d356a06e1f488c1739fa3ce5f1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088028347362087,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d831a5ae2952706a0b202abb91a8499d5e1c2ddd165475f28a91c53341c00a38,PodSandboxId:5480e44fcb6470fbc9a7d40696866e097334668ea6f19f3ed7452679fd1508ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088028333001367,Labels:map[string]string{io
.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1818e8ced4ac95b3b06008fd10492e413c274475e6c082e31c45fdcf734bf636,PodSandboxId:06c67af040a6d73a44b72ccee85ed508a1c041f5fde2e20db5364f88e37d1050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727087989543616811,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xlkcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e530bbff-54d5-48b5-a16d-5bd1d7c5da8d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a082847276b61e5d653f39a9222198975faccf1f7adf8efac176bacead8a9ede,PodSandboxId:736b2651e9e25b8dabb03ffa0c46194ef08e2345b29d13ac8b40679dc1a41b78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727087989258930032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kl2g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af3dd94-0d58-45ba-862b-5d94ff669547,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2fc13fd1746ac5718fd662dc0b65d716c40a0e3c48b346602fe3818bd822e82,PodSandboxId:a3048fe319722c3c593f20f5ad437f1d95e0833c4d3d89a7dc54ec51084c5cf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727087989190288980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54319a0-aafe-4e68-addd-71b31e5ccde6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb50ba39a2ec1ba8f04f69f0943e085d9a06ccb50c3e72bcd20743804ce804,PodSandboxId:6c5e2395f383746441f2676b2a3c0546a561197966b0fe7bc98d98b435a88361,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727087985464503006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f16ee05ca7af7a8498b81222b3afaf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb792513e21ec026c32fa3c8ac4f7cedc3a0e2b64d0ff8cb1377cef145f80b30,PodSandboxId:bc93a8bdf9299d9ec73c7085a47a52a9cf2382e9dfb74b08885d7596d049981c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471
a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727087985409671091,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a842605e607159f5ae00a1df02b0dce1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6896a210a1e859cc0afbf4104ea38b9186b6b4fbd8950778899ea80d6723989,PodSandboxId:5d4b73bb2638fe7a65fc74818622c5e968aadd89491cce61b9c9098686307430,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbb
e954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727087985384979949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-870347,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c79220d073b9c31a23ce96af02d1b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3073d43-eb84-41a5-b98e-7b7b7f3696ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a4fa705799dba       docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3            9 minutes ago       Running             myfrontend                  0                   846871cb90705       sp-pod
	892dedf7e6774       115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7                                           9 minutes ago       Running             dashboard-metrics-scraper   0                   169ebc43e302d       dashboard-metrics-scraper-c5db448b4-dbgqp
	ec456493ea341       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                           10 minutes ago      Running             echoserver                  0                   59d1d6437ebb3       hello-node-connect-67bdd5bbb4-gc8ht
	93de6e04e1a9e       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   10 minutes ago      Running             kubernetes-dashboard        0                   c49b8098d67cf       kubernetes-dashboard-695b96c756-nszv7
	be1525d03f296       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e        10 minutes ago      Exited              mount-munger                0                   56fbe36ae1d05       busybox-mount
	2d7af89d1538a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         10 minutes ago      Running             echoserver                  0                   330835474aa04       hello-node-6b9f76b5c7-bnrp8
	71c1a0e0be517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           10 minutes ago      Running             storage-provisioner         2                   a3eca749513c0       storage-provisioner
	88229ec096945       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           10 minutes ago      Running             kube-proxy                  2                   dfa6cdb106f57       kube-proxy-kl2g8
	ad755cb524b18       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           10 minutes ago      Running             coredns                     2                   4c9a7677c900b       coredns-7c65d6cfc9-xlkcb
	c9a7e3bcb45a5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                           10 minutes ago      Running             kube-apiserver              0                   5dc30c48d88dc       kube-apiserver-functional-870347
	b1d5a32228a98       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           10 minutes ago      Running             kube-controller-manager     2                   ecf87791bc886       kube-controller-manager-functional-870347
	d831a5ae29527       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           10 minutes ago      Running             kube-scheduler              2                   5480e44fcb647       kube-scheduler-functional-870347
	43ae06c18d1bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           10 minutes ago      Running             etcd                        2                   cdd35295e3c0d       etcd-functional-870347
	1818e8ced4ac9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           11 minutes ago      Exited              coredns                     1                   06c67af040a6d       coredns-7c65d6cfc9-xlkcb
	a082847276b61       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                           11 minutes ago      Exited              kube-proxy                  1                   736b2651e9e25       kube-proxy-kl2g8
	e2fc13fd1746a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           11 minutes ago      Exited              storage-provisioner         1                   a3048fe319722       storage-provisioner
	fceb50ba39a2e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                           11 minutes ago      Exited              etcd                        1                   6c5e2395f3837       etcd-functional-870347
	bb792513e21ec       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                           11 minutes ago      Exited              kube-controller-manager     1                   bc93a8bdf9299       kube-controller-manager-functional-870347
	f6896a210a1e8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                           11 minutes ago      Exited              kube-scheduler              1                   5d4b73bb2638f       kube-scheduler-functional-870347
	
	
	==> coredns [1818e8ced4ac95b3b06008fd10492e413c274475e6c082e31c45fdcf734bf636] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36312 - 10220 "HINFO IN 443107971578085427.6215094100085909102. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014668145s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad755cb524b1819a3d2af7aa49674d2ee60b3e8308d5947a2a45eff41de3422f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36176 - 42396 "HINFO IN 5587452673304153662.2641795758581130429. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01517777s
	
	
	==> describe nodes <==
	Name:               functional-870347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-870347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=functional-870347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_39_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:39:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-870347
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:51:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:47:09 +0000   Mon, 23 Sep 2024 10:39:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:47:09 +0000   Mon, 23 Sep 2024 10:39:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:47:09 +0000   Mon, 23 Sep 2024 10:39:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:47:09 +0000   Mon, 23 Sep 2024 10:39:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    functional-870347
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e6c81839b474986ba573db9954f14b0
	  System UUID:                1e6c8183-9b47-4986-ba57-3db9954f14b0
	  Boot ID:                    50c59064-00f8-42ad-94dc-3cede723fcac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-bnrp8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-gc8ht          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-p9xlv                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-7c65d6cfc9-xlkcb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-870347                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-870347             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-870347    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kl2g8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-870347             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-dbgqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-nszv7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-870347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-870347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-870347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node functional-870347 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-870347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-870347 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-870347 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                node-controller  Node functional-870347 event: Registered Node functional-870347 in Controller
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-870347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-870347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-870347 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-870347 event: Registered Node functional-870347 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-870347 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-870347 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-870347 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-870347 event: Registered Node functional-870347 in Controller
	
	
	==> dmesg <==
	[  +0.270556] systemd-fstab-generator[2466]: Ignoring "noauto" option for root device
	[  +7.898655] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[  +0.072670] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.987781] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +4.534869] kauditd_printk_skb: 74 callbacks suppressed
	[Sep23 10:40] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.087512] kauditd_printk_skb: 37 callbacks suppressed
	[ +18.637541] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.393063] systemd-fstab-generator[4531]: Ignoring "noauto" option for root device
	[  +0.201888] systemd-fstab-generator[4628]: Ignoring "noauto" option for root device
	[  +0.214037] systemd-fstab-generator[4715]: Ignoring "noauto" option for root device
	[  +0.137073] systemd-fstab-generator[4727]: Ignoring "noauto" option for root device
	[  +0.324967] systemd-fstab-generator[4756]: Ignoring "noauto" option for root device
	[  +0.839056] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[  +2.433110] systemd-fstab-generator[5384]: Ignoring "noauto" option for root device
	[  +0.460629] kauditd_printk_skb: 257 callbacks suppressed
	[  +7.071525] kauditd_printk_skb: 39 callbacks suppressed
	[ +11.738320] systemd-fstab-generator[5968]: Ignoring "noauto" option for root device
	[  +6.232349] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.415363] kauditd_printk_skb: 33 callbacks suppressed
	[Sep23 10:41] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.239998] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.827635] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.016476] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.935548] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [43ae06c18d1bf07cacde022d2f62ce81dd6190bb8792558ce2334cac074f7767] <==
	{"level":"info","ts":"2024-09-23T10:40:28.704633Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T10:40:28.705797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"22dc5a3adec033ed","local-member-id":"dc6e2f4e9dcc679a","added-peer-id":"dc6e2f4e9dcc679a","added-peer-peer-urls":["https://192.168.39.190:2380"]}
	{"level":"info","ts":"2024-09-23T10:40:28.706063Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"22dc5a3adec033ed","local-member-id":"dc6e2f4e9dcc679a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:40:28.706233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:40:28.706130Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-09-23T10:40:30.176008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-23T10:40:30.176051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-23T10:40:30.176086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgPreVoteResp from dc6e2f4e9dcc679a at term 3"}
	{"level":"info","ts":"2024-09-23T10:40:30.176100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became candidate at term 4"}
	{"level":"info","ts":"2024-09-23T10:40:30.176113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgVoteResp from dc6e2f4e9dcc679a at term 4"}
	{"level":"info","ts":"2024-09-23T10:40:30.176122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became leader at term 4"}
	{"level":"info","ts":"2024-09-23T10:40:30.176128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dc6e2f4e9dcc679a elected leader dc6e2f4e9dcc679a at term 4"}
	{"level":"info","ts":"2024-09-23T10:40:30.181439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:40:30.181531Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:40:30.181567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:40:30.181290Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dc6e2f4e9dcc679a","local-member-attributes":"{Name:functional-870347 ClientURLs:[https://192.168.39.190:2379]}","request-path":"/0/members/dc6e2f4e9dcc679a/attributes","cluster-id":"22dc5a3adec033ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:40:30.181888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:40:30.182552Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:40:30.183396Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.190:2379"}
	{"level":"info","ts":"2024-09-23T10:40:30.182566Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:40:30.184268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:42:00.379993Z","caller":"traceutil/trace.go:171","msg":"trace[1375838042] transaction","detail":"{read_only:false; response_revision:902; number_of_response:1; }","duration":"210.675032ms","start":"2024-09-23T10:42:00.169294Z","end":"2024-09-23T10:42:00.379969Z","steps":["trace[1375838042] 'process raft request'  (duration: 209.807311ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:50:30.212477Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2024-09-23T10:50:30.237730Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1071,"took":"24.67159ms","hash":640815911,"current-db-size-bytes":3690496,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1437696,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2024-09-23T10:50:30.237945Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":640815911,"revision":1071,"compact-revision":-1}
	
	
	==> etcd [fceb50ba39a2ec1ba8f04f69f0943e085d9a06ccb50c3e72bcd20743804ce804] <==
	{"level":"info","ts":"2024-09-23T10:39:46.844976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:39:46.845014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgPreVoteResp from dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-09-23T10:39:46.845044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T10:39:46.845069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgVoteResp from dc6e2f4e9dcc679a at term 3"}
	{"level":"info","ts":"2024-09-23T10:39:46.845096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became leader at term 3"}
	{"level":"info","ts":"2024-09-23T10:39:46.845147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dc6e2f4e9dcc679a elected leader dc6e2f4e9dcc679a at term 3"}
	{"level":"info","ts":"2024-09-23T10:39:46.852276Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dc6e2f4e9dcc679a","local-member-attributes":"{Name:functional-870347 ClientURLs:[https://192.168.39.190:2379]}","request-path":"/0/members/dc6e2f4e9dcc679a/attributes","cluster-id":"22dc5a3adec033ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:39:46.854970Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:39:46.855960Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:39:46.856710Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.190:2379"}
	{"level":"info","ts":"2024-09-23T10:39:46.858865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:39:46.858896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:39:46.859721Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:39:46.865202Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:39:46.866034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:40:17.010879Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T10:40:17.018077Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-870347","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	{"level":"warn","ts":"2024-09-23T10:40:17.021503Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:40:17.021625Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:40:17.101467Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:40:17.102052Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T10:40:17.103118Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dc6e2f4e9dcc679a","current-leader-member-id":"dc6e2f4e9dcc679a"}
	{"level":"info","ts":"2024-09-23T10:40:17.106737Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-09-23T10:40:17.106910Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-09-23T10:40:17.106941Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-870347","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	
	
	==> kernel <==
	 10:51:21 up 12 min,  0 users,  load average: 0.35, 0.31, 0.21
	Linux functional-870347 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9a7e3bcb45a5411a7288d6916873871be99c7a9a8735e57b9f47968aa9e46dd] <==
	I0923 10:40:31.566799       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 10:40:31.567268       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 10:40:31.571144       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 10:40:31.571365       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 10:40:31.576948       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 10:40:31.577127       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 10:40:31.585128       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 10:40:32.371209       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 10:40:33.091274       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 10:40:33.118592       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 10:40:33.166526       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 10:40:33.193468       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 10:40:33.200331       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 10:40:35.030589       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 10:40:35.119025       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 10:40:53.090835       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.204.237"}
	I0923 10:40:57.297027       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 10:40:57.422541       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.121.198"}
	I0923 10:41:00.815937       1 controller.go:615] quota admission added evaluator for: namespaces
	I0923 10:41:01.315137       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.159.178"}
	I0923 10:41:01.374218       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.122.197"}
	I0923 10:41:12.480277       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.158.248"}
	I0923 10:41:20.294401       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.153.34"}
	E0923 10:41:32.073475       1 conn.go:339] Error on socket receive: read tcp 192.168.39.190:8441->192.168.39.1:45664: use of closed network connection
	E0923 10:41:40.788005       1 conn.go:339] Error on socket receive: read tcp 192.168.39.190:8441->192.168.39.1:46076: use of closed network connection
	
	
	==> kube-controller-manager [b1d5a32228a9839e9e21750b3f5a6c02f01be0af463c9ebd8226268d8124432c] <==
	I0923 10:41:01.156008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="92.865657ms"
	I0923 10:41:01.193821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="69.168483ms"
	I0923 10:41:01.193927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="59.184µs"
	I0923 10:41:01.248829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="92.754175ms"
	I0923 10:41:01.249974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="122.582µs"
	I0923 10:41:03.265868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6b9f76b5c7" duration="9.316625ms"
	I0923 10:41:03.266101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6b9f76b5c7" duration="93.364µs"
	I0923 10:41:12.409706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="42.719842ms"
	I0923 10:41:12.460430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="50.60056ms"
	I0923 10:41:12.490701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="30.236523ms"
	I0923 10:41:12.490829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="99.053µs"
	I0923 10:41:14.346872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="12.480289ms"
	I0923 10:41:14.347643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="40.74µs"
	I0923 10:41:14.363658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.385319ms"
	I0923 10:41:14.367124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="135.237µs"
	I0923 10:41:20.406462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="29.212951ms"
	I0923 10:41:20.428412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="21.807952ms"
	I0923 10:41:20.473044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="44.55124ms"
	I0923 10:41:20.473144       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="52.502µs"
	I0923 10:41:25.518233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="349.66µs"
	I0923 10:41:26.521312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.049953ms"
	I0923 10:41:26.521497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="50.009µs"
	I0923 10:41:32.851985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-870347"
	I0923 10:42:03.502971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-870347"
	I0923 10:47:09.706706       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-870347"
	
	
	==> kube-controller-manager [bb792513e21ec026c32fa3c8ac4f7cedc3a0e2b64d0ff8cb1377cef145f80b30] <==
	I0923 10:39:51.663075       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0923 10:39:51.665847       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0923 10:39:51.667252       1 shared_informer.go:320] Caches are synced for node
	I0923 10:39:51.667297       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0923 10:39:51.667328       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0923 10:39:51.667332       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0923 10:39:51.667336       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0923 10:39:51.667396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-870347"
	I0923 10:39:51.667459       1 shared_informer.go:320] Caches are synced for ephemeral
	I0923 10:39:51.670891       1 shared_informer.go:320] Caches are synced for taint
	I0923 10:39:51.670988       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0923 10:39:51.671054       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-870347"
	I0923 10:39:51.671116       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 10:39:51.674652       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 10:39:51.680433       1 shared_informer.go:320] Caches are synced for PVC protection
	I0923 10:39:51.716662       1 shared_informer.go:320] Caches are synced for namespace
	I0923 10:39:51.761684       1 shared_informer.go:320] Caches are synced for disruption
	I0923 10:39:51.851843       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0923 10:39:51.860410       1 shared_informer.go:320] Caches are synced for cronjob
	I0923 10:39:51.868847       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:39:51.881326       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:39:51.911736       1 shared_informer.go:320] Caches are synced for crt configmap
	I0923 10:39:52.289248       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 10:39:52.289289       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 10:39:52.294966       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [88229ec096945bc87d66e6aee04c7103b5440f1fa32d184367e8688138523d91] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:40:32.480209       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:40:32.491060       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	E0923 10:40:32.491214       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:40:32.548844       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:40:32.548897       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:40:32.548924       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:40:32.554360       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:40:32.554656       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:40:32.554684       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:40:32.557052       1 config.go:199] "Starting service config controller"
	I0923 10:40:32.557096       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:40:32.557126       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:40:32.557130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:40:32.557799       1 config.go:328] "Starting node config controller"
	I0923 10:40:32.557855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:40:32.657979       1 shared_informer.go:320] Caches are synced for node config
	I0923 10:40:32.658034       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:40:32.658068       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a082847276b61e5d653f39a9222198975faccf1f7adf8efac176bacead8a9ede] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:39:49.648220       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:39:49.663136       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	E0923 10:39:49.663222       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:39:49.709069       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:39:49.709119       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:39:49.709142       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:39:49.711652       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:39:49.712028       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:39:49.712055       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:39:49.713161       1 config.go:199] "Starting service config controller"
	I0923 10:39:49.713246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:39:49.713287       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:39:49.713307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:39:49.713680       1 config.go:328] "Starting node config controller"
	I0923 10:39:49.713711       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:39:49.813886       1 shared_informer.go:320] Caches are synced for node config
	I0923 10:39:49.814110       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:39:49.814120       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d831a5ae2952706a0b202abb91a8499d5e1c2ddd165475f28a91c53341c00a38] <==
	I0923 10:40:29.278692       1 serving.go:386] Generated self-signed cert in-memory
	W0923 10:40:31.468377       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 10:40:31.468502       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 10:40:31.468903       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 10:40:31.469055       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 10:40:31.515987       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 10:40:31.516148       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:40:31.525206       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 10:40:31.528093       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 10:40:31.528527       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:40:31.528122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 10:40:31.629480       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f6896a210a1e859cc0afbf4104ea38b9186b6b4fbd8950778899ea80d6723989] <==
	I0923 10:39:46.565283       1 serving.go:386] Generated self-signed cert in-memory
	W0923 10:39:48.276373       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 10:39:48.276495       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 10:39:48.276527       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 10:39:48.276550       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 10:39:48.335052       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 10:39:48.335089       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:39:48.352467       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 10:39:48.354718       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:39:48.355820       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 10:39:48.356024       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 10:39:48.455502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:40:17.003032       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 10:40:17.003110       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 10:40:17.003292       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 23 10:49:48 functional-870347 kubelet[5391]: E0923 10:49:48.073153    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088588072725210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:49:48 functional-870347 kubelet[5391]: E0923 10:49:48.073195    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088588072725210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:49:58 functional-870347 kubelet[5391]: E0923 10:49:58.075497    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088598075006536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:49:58 functional-870347 kubelet[5391]: E0923 10:49:58.075525    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088598075006536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:08 functional-870347 kubelet[5391]: E0923 10:50:08.077549    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088608077057438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:08 functional-870347 kubelet[5391]: E0923 10:50:08.077620    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088608077057438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:18 functional-870347 kubelet[5391]: E0923 10:50:18.082074    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088618080192821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:18 functional-870347 kubelet[5391]: E0923 10:50:18.082115    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088618080192821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:27 functional-870347 kubelet[5391]: E0923 10:50:27.947100    5391 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 10:50:27 functional-870347 kubelet[5391]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 10:50:27 functional-870347 kubelet[5391]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 10:50:27 functional-870347 kubelet[5391]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 10:50:27 functional-870347 kubelet[5391]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 10:50:28 functional-870347 kubelet[5391]: E0923 10:50:28.085362    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088628084009597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:28 functional-870347 kubelet[5391]: E0923 10:50:28.085400    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088628084009597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:38 functional-870347 kubelet[5391]: E0923 10:50:38.089034    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088638087851122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:38 functional-870347 kubelet[5391]: E0923 10:50:38.089510    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088638087851122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:48 functional-870347 kubelet[5391]: E0923 10:50:48.092396    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088648091402743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:48 functional-870347 kubelet[5391]: E0923 10:50:48.092418    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088648091402743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:58 functional-870347 kubelet[5391]: E0923 10:50:58.094964    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088658094434433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:50:58 functional-870347 kubelet[5391]: E0923 10:50:58.095052    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088658094434433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:51:08 functional-870347 kubelet[5391]: E0923 10:51:08.097533    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088668097278944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:51:08 functional-870347 kubelet[5391]: E0923 10:51:08.098130    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088668097278944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:51:18 functional-870347 kubelet[5391]: E0923 10:51:18.101109    5391 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088678100602623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:51:18 functional-870347 kubelet[5391]: E0923 10:51:18.101863    5391 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727088678100602623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:260726,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [93de6e04e1a9ef0af026598667162bf94036aa63f6e427b7a716c595737b7fa4] <==
	2024/09/23 10:41:14 Starting overwatch
	2024/09/23 10:41:14 Using namespace: kubernetes-dashboard
	2024/09/23 10:41:14 Using in-cluster config to connect to apiserver
	2024/09/23 10:41:14 Using secret token for csrf signing
	2024/09/23 10:41:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/23 10:41:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/23 10:41:14 Successful initial request to the apiserver, version: v1.31.1
	2024/09/23 10:41:14 Generating JWE encryption key
	2024/09/23 10:41:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/23 10:41:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/23 10:41:14 Initializing JWE encryption key from synchronized object
	2024/09/23 10:41:14 Creating in-cluster Sidecar client
	2024/09/23 10:41:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 10:41:14 Serving insecurely on HTTP port: 9090
	2024/09/23 10:41:44 Successful request to sidecar
	
	
	==> storage-provisioner [71c1a0e0be51792c80eddefbf6b897c48b5ffd226e1cc724493428292288dfd4] <==
	I0923 10:40:32.327857       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:40:32.345115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:40:32.345179       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:40:49.747026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:40:49.747306       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-870347_437b2f84-b828-4b79-8ce7-d9ff1d3f0498!
	I0923 10:40:49.747459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97341010-0492-42fc-a133-4c62a28dd4a6", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-870347_437b2f84-b828-4b79-8ce7-d9ff1d3f0498 became leader
	I0923 10:40:49.849513       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-870347_437b2f84-b828-4b79-8ce7-d9ff1d3f0498!
	I0923 10:41:04.785455       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0923 10:41:04.785693       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d3c9d74c-dacc-472f-91ac-ea2b1ec79cc3 373 0 2024-09-23 10:39:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-23 10:39:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f9e0f410-134c-49c6-8b91-fc1bb2c2e20a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f9e0f410-134c-49c6-8b91-fc1bb2c2e20a 757 0 2024-09-23 10:41:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-23 10:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-23 10:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0923 10:41:04.786378       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f9e0f410-134c-49c6-8b91-fc1bb2c2e20a", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0923 10:41:04.786799       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f9e0f410-134c-49c6-8b91-fc1bb2c2e20a" provisioned
	I0923 10:41:04.786844       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0923 10:41:04.786971       1 volume_store.go:212] Trying to save persistentvolume "pvc-f9e0f410-134c-49c6-8b91-fc1bb2c2e20a"
	I0923 10:41:04.803514       1 volume_store.go:219] persistentvolume "pvc-f9e0f410-134c-49c6-8b91-fc1bb2c2e20a" saved
	I0923 10:41:04.803696       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f9e0f410-134c-49c6-8b91-fc1bb2c2e20a", APIVersion:"v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f9e0f410-134c-49c6-8b91-fc1bb2c2e20a
	
	
	==> storage-provisioner [e2fc13fd1746ac5718fd662dc0b65d716c40a0e3c48b346602fe3818bd822e82] <==
	I0923 10:39:49.379845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:39:49.411716       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:39:49.411875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:40:06.814543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:40:06.814702       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-870347_c0346c55-453c-4406-a052-d19a02e3e74a!
	I0923 10:40:06.815720       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97341010-0492-42fc-a133-4c62a28dd4a6", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-870347_c0346c55-453c-4406-a052-d19a02e3e74a became leader
	I0923 10:40:06.915233       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-870347_c0346c55-453c-4406-a052-d19a02e3e74a!
	

                                                
                                                
-- /stdout --
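Note on the repeated kubelet errors in the log above: the eviction manager complains because the runtime's ImageFsInfo response carries an empty ContainerFilesystems list, so the kubelet cannot decide whether the node has a dedicated image filesystem. Below is a minimal sketch of the same CRI call made directly against CRI-O, assuming its default socket path (/var/run/crio/crio.sock) and current cri-api/grpc-go modules; this is illustrative only and not code from this test run.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// CRI-O's default endpoint; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewImageServiceClient(conn)
		resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, fs := range resp.GetImageFilesystems() {
			fmt.Printf("image fs %s: %d bytes used\n",
				fs.GetFsId().GetMountpoint(), fs.GetUsedBytes().GetValue())
		}
		// The kubelet errors above stem from this field being empty.
		fmt.Printf("container filesystems reported: %d\n", len(resp.GetContainerFilesystems()))
	}

Running such a probe on the node would show whether CRI-O itself omits ContainerFilesystems or whether the kubelet fails earlier in its stats pipeline.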
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-870347 -n functional-870347
helpers_test.go:261: (dbg) Run:  kubectl --context functional-870347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-p9xlv
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-870347 describe pod busybox-mount mysql-6cdb49bbb-p9xlv
helpers_test.go:282: (dbg) kubectl --context functional-870347 describe pod busybox-mount mysql-6cdb49bbb-p9xlv:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-870347/192.168.39.190
	Start Time:       Mon, 23 Sep 2024 10:40:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://be1525d03f296e61064c22049104dd56ea94f5df0608bfbe96925cd9361b1b66
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 23 Sep 2024 10:41:06 +0000
	      Finished:     Mon, 23 Sep 2024 10:41:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kv6mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kv6mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-870347
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.836s (4.076s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-p9xlv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-870347/192.168.39.190
	Start Time:       Mon, 23 Sep 2024 10:41:20 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbjw6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tbjw6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-p9xlv to functional-870347

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.78s)
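The describe output above shows mysql-6cdb49bbb-p9xlv still Pending in ContainerCreating after roughly ten minutes, with only a Scheduled event, so the test's readiness wait ran out. For context, a minimal sketch of that kind of bounded wait is shown below, assuming client-go with a default kubeconfig and the app=mysql label seen in the describe output; the test harness itself drives this through minikube and kubectl, so the exact code differs.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 5s for up to 10 minutes, matching the order of magnitude
		// of the wait that failed here (602.78s).
		err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=mysql"})
				if err != nil {
					return false, nil // retry on transient API errors
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("mysql pod never became Running:", err)
		}
	}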

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 node stop m02 -v=7 --alsologtostderr
E0923 10:56:07.694547   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:56:17.936117   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:56:38.418186   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:57:19.379641   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-790780 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.452301741s)

                                                
                                                
-- stdout --
	* Stopping node "ha-790780-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:56:06.538112   29067 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:56:06.538260   29067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:56:06.538270   29067 out.go:358] Setting ErrFile to fd 2...
	I0923 10:56:06.538276   29067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:56:06.538448   29067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:56:06.538708   29067 mustload.go:65] Loading cluster: ha-790780
	I0923 10:56:06.539079   29067 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:56:06.539100   29067 stop.go:39] StopHost: ha-790780-m02
	I0923 10:56:06.539487   29067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:56:06.539534   29067 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:56:06.555534   29067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0923 10:56:06.556016   29067 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:56:06.556705   29067 main.go:141] libmachine: Using API Version  1
	I0923 10:56:06.556725   29067 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:56:06.557044   29067 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:56:06.559500   29067 out.go:177] * Stopping node "ha-790780-m02"  ...
	I0923 10:56:06.560983   29067 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 10:56:06.561022   29067 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:56:06.561272   29067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 10:56:06.561303   29067 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:56:06.564280   29067 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:56:06.564691   29067 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:56:06.564718   29067 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:56:06.564846   29067 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:56:06.565015   29067 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:56:06.565142   29067 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:56:06.565290   29067 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:56:06.648925   29067 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 10:56:06.703092   29067 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 10:56:06.757556   29067 main.go:141] libmachine: Stopping "ha-790780-m02"...
	I0923 10:56:06.757578   29067 main.go:141] libmachine: (ha-790780-m02) Calling .GetState
	I0923 10:56:06.759115   29067 main.go:141] libmachine: (ha-790780-m02) Calling .Stop
	I0923 10:56:06.762938   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 0/120
	I0923 10:56:07.765121   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 1/120
	I0923 10:56:08.766457   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 2/120
	I0923 10:56:09.767513   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 3/120
	I0923 10:56:10.769757   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 4/120
	I0923 10:56:11.771410   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 5/120
	I0923 10:56:12.772917   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 6/120
	I0923 10:56:13.774300   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 7/120
	I0923 10:56:14.776733   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 8/120
	I0923 10:56:15.778067   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 9/120
	I0923 10:56:16.780419   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 10/120
	I0923 10:56:17.781838   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 11/120
	I0923 10:56:18.784327   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 12/120
	I0923 10:56:19.785825   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 13/120
	I0923 10:56:20.787150   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 14/120
	I0923 10:56:21.789528   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 15/120
	I0923 10:56:22.790813   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 16/120
	I0923 10:56:23.791926   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 17/120
	I0923 10:56:24.793115   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 18/120
	I0923 10:56:25.794518   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 19/120
	I0923 10:56:26.796576   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 20/120
	I0923 10:56:27.797993   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 21/120
	I0923 10:56:28.799142   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 22/120
	I0923 10:56:29.800343   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 23/120
	I0923 10:56:30.801876   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 24/120
	I0923 10:56:31.803757   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 25/120
	I0923 10:56:32.805885   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 26/120
	I0923 10:56:33.807703   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 27/120
	I0923 10:56:34.809842   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 28/120
	I0923 10:56:35.811916   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 29/120
	I0923 10:56:36.813967   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 30/120
	I0923 10:56:37.815261   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 31/120
	I0923 10:56:38.816669   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 32/120
	I0923 10:56:39.818008   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 33/120
	I0923 10:56:40.819251   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 34/120
	I0923 10:56:41.820544   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 35/120
	I0923 10:56:42.821989   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 36/120
	I0923 10:56:43.823842   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 37/120
	I0923 10:56:44.825220   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 38/120
	I0923 10:56:45.826558   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 39/120
	I0923 10:56:46.828734   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 40/120
	I0923 10:56:47.830009   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 41/120
	I0923 10:56:48.831781   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 42/120
	I0923 10:56:49.833070   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 43/120
	I0923 10:56:50.834437   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 44/120
	I0923 10:56:51.836560   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 45/120
	I0923 10:56:52.837914   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 46/120
	I0923 10:56:53.839061   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 47/120
	I0923 10:56:54.840283   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 48/120
	I0923 10:56:55.841526   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 49/120
	I0923 10:56:56.843171   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 50/120
	I0923 10:56:57.844477   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 51/120
	I0923 10:56:58.845818   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 52/120
	I0923 10:56:59.847114   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 53/120
	I0923 10:57:00.848527   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 54/120
	I0923 10:57:01.850521   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 55/120
	I0923 10:57:02.852000   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 56/120
	I0923 10:57:03.853465   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 57/120
	I0923 10:57:04.854654   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 58/120
	I0923 10:57:05.855956   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 59/120
	I0923 10:57:06.858192   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 60/120
	I0923 10:57:07.859553   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 61/120
	I0923 10:57:08.860813   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 62/120
	I0923 10:57:09.862208   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 63/120
	I0923 10:57:10.863931   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 64/120
	I0923 10:57:11.866057   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 65/120
	I0923 10:57:12.867813   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 66/120
	I0923 10:57:13.868999   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 67/120
	I0923 10:57:14.870401   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 68/120
	I0923 10:57:15.871620   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 69/120
	I0923 10:57:16.873174   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 70/120
	I0923 10:57:17.874470   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 71/120
	I0923 10:57:18.875941   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 72/120
	I0923 10:57:19.877237   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 73/120
	I0923 10:57:20.878633   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 74/120
	I0923 10:57:21.880832   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 75/120
	I0923 10:57:22.882261   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 76/120
	I0923 10:57:23.883576   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 77/120
	I0923 10:57:24.885099   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 78/120
	I0923 10:57:25.886380   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 79/120
	I0923 10:57:26.887768   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 80/120
	I0923 10:57:27.889072   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 81/120
	I0923 10:57:28.890409   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 82/120
	I0923 10:57:29.891682   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 83/120
	I0923 10:57:30.893846   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 84/120
	I0923 10:57:31.895839   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 85/120
	I0923 10:57:32.897100   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 86/120
	I0923 10:57:33.898487   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 87/120
	I0923 10:57:34.899829   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 88/120
	I0923 10:57:35.901071   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 89/120
	I0923 10:57:36.902821   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 90/120
	I0923 10:57:37.905112   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 91/120
	I0923 10:57:38.906422   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 92/120
	I0923 10:57:39.907684   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 93/120
	I0923 10:57:40.909300   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 94/120
	I0923 10:57:41.910674   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 95/120
	I0923 10:57:42.911941   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 96/120
	I0923 10:57:43.913287   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 97/120
	I0923 10:57:44.914646   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 98/120
	I0923 10:57:45.915927   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 99/120
	I0923 10:57:46.917956   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 100/120
	I0923 10:57:47.919966   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 101/120
	I0923 10:57:48.921197   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 102/120
	I0923 10:57:49.922504   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 103/120
	I0923 10:57:50.923891   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 104/120
	I0923 10:57:51.925990   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 105/120
	I0923 10:57:52.927789   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 106/120
	I0923 10:57:53.929011   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 107/120
	I0923 10:57:54.930378   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 108/120
	I0923 10:57:55.931917   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 109/120
	I0923 10:57:56.934114   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 110/120
	I0923 10:57:57.935872   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 111/120
	I0923 10:57:58.937214   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 112/120
	I0923 10:57:59.938749   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 113/120
	I0923 10:58:00.940067   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 114/120
	I0923 10:58:01.942308   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 115/120
	I0923 10:58:02.943738   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 116/120
	I0923 10:58:03.945259   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 117/120
	I0923 10:58:04.946604   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 118/120
	I0923 10:58:05.947836   29067 main.go:141] libmachine: (ha-790780-m02) Waiting for machine to stop 119/120
	I0923 10:58:06.948911   29067 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 10:58:06.949052   29067 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
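The stderr above shows the kvm2 driver polling once per second for 120 attempts and then giving up with exit status 30 because the m02 VM never left the Running state. A rough sketch of that wait pattern follows; the real driver checks the libvirt domain state, so checkStopped here is a hypothetical stand-in used only to illustrate the bounded loop.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// checkStopped is a hypothetical probe of the VM's power state.
	func checkStopped(name string) bool {
		// e.g. ask the hypervisor/driver whether the domain has shut off
		return false
	}

	func waitForStop(name string) error {
		const attempts = 120 // one check per second, ~2 minutes total
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if checkStopped(name) {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop("ha-790780-m02"); err != nil {
			fmt.Println("stop err:", err)
		}
	}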
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-790780 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr: (18.712541464s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-790780 -n ha-790780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 logs -n 25: (1.403725439s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m03_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m04 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp testdata/cp-test.txt                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m03 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-790780 node stop m02 -v=7                                                    | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:51:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:51:23.890810   24995 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:51:23.891041   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891049   24995 out.go:358] Setting ErrFile to fd 2...
	I0923 10:51:23.891053   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891205   24995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:51:23.891746   24995 out.go:352] Setting JSON to false
	I0923 10:51:23.892628   24995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2027,"bootTime":1727086657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:51:23.892719   24995 start.go:139] virtualization: kvm guest
	I0923 10:51:23.894714   24995 out.go:177] * [ha-790780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:51:23.896009   24995 notify.go:220] Checking for updates...
	I0923 10:51:23.896015   24995 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:51:23.897316   24995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:51:23.898483   24995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:51:23.899745   24995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:23.900930   24995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:51:23.902097   24995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:51:23.903412   24995 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:51:23.936575   24995 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:51:23.937738   24995 start.go:297] selected driver: kvm2
	I0923 10:51:23.937760   24995 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:51:23.937777   24995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:51:23.938571   24995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.938654   24995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:51:23.953375   24995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:51:23.953445   24995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:51:23.953711   24995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:51:23.953749   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:23.953813   24995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 10:51:23.953825   24995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:51:23.953893   24995 start.go:340] cluster config:
	{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:23.954007   24995 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.956292   24995 out.go:177] * Starting "ha-790780" primary control-plane node in "ha-790780" cluster
	I0923 10:51:23.957482   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:23.957517   24995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:51:23.957529   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:51:23.957599   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:51:23.957611   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:51:23.957934   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:23.957961   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json: {Name:mk715d227144254f94a596853caa0306f08b9846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:23.958130   24995 start.go:360] acquireMachinesLock for ha-790780: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:51:23.958172   24995 start.go:364] duration metric: took 22.743µs to acquireMachinesLock for "ha-790780"
	I0923 10:51:23.958195   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:51:23.958264   24995 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:51:23.959776   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:51:23.959913   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:51:23.959959   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:51:23.974405   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0923 10:51:23.974852   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:51:23.975494   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:51:23.975517   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:51:23.975789   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:51:23.975953   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:23.976064   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:23.976227   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:51:23.976305   24995 client.go:168] LocalClient.Create starting
	I0923 10:51:23.976394   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:51:23.976453   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976474   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976558   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:51:23.976590   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976607   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976637   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:51:23.976646   24995 main.go:141] libmachine: (ha-790780) Calling .PreCreateCheck
	I0923 10:51:23.976933   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:23.977298   24995 main.go:141] libmachine: Creating machine...
	I0923 10:51:23.977310   24995 main.go:141] libmachine: (ha-790780) Calling .Create
	I0923 10:51:23.977514   24995 main.go:141] libmachine: (ha-790780) Creating KVM machine...
	I0923 10:51:23.978674   24995 main.go:141] libmachine: (ha-790780) DBG | found existing default KVM network
	I0923 10:51:23.979392   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:23.979247   25018 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0923 10:51:23.979430   24995 main.go:141] libmachine: (ha-790780) DBG | created network xml: 
	I0923 10:51:23.979450   24995 main.go:141] libmachine: (ha-790780) DBG | <network>
	I0923 10:51:23.979460   24995 main.go:141] libmachine: (ha-790780) DBG |   <name>mk-ha-790780</name>
	I0923 10:51:23.979472   24995 main.go:141] libmachine: (ha-790780) DBG |   <dns enable='no'/>
	I0923 10:51:23.979483   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979494   24995 main.go:141] libmachine: (ha-790780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:51:23.979499   24995 main.go:141] libmachine: (ha-790780) DBG |     <dhcp>
	I0923 10:51:23.979504   24995 main.go:141] libmachine: (ha-790780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:51:23.979512   24995 main.go:141] libmachine: (ha-790780) DBG |     </dhcp>
	I0923 10:51:23.979520   24995 main.go:141] libmachine: (ha-790780) DBG |   </ip>
	I0923 10:51:23.979526   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979532   24995 main.go:141] libmachine: (ha-790780) DBG | </network>
	I0923 10:51:23.979541   24995 main.go:141] libmachine: (ha-790780) DBG | 
	I0923 10:51:23.984532   24995 main.go:141] libmachine: (ha-790780) DBG | trying to create private KVM network mk-ha-790780 192.168.39.0/24...
	I0923 10:51:24.046915   24995 main.go:141] libmachine: (ha-790780) DBG | private KVM network mk-ha-790780 192.168.39.0/24 created
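(The driver creates this network through the libvirt API rather than by shelling out; as a rough manual equivalent of the step above, with the temporary file name assumed, the same network could be defined with virsh:)

cat > /tmp/mk-ha-790780.xml <<'EOF'
<network>
  <name>mk-ha-790780</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>
EOF
# Register and start the private network on the system libvirt instance
virsh --connect qemu:///system net-define /tmp/mk-ha-790780.xml
virsh --connect qemu:///system net-start mk-ha-790780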
	I0923 10:51:24.046951   24995 main.go:141] libmachine: (ha-790780) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.046970   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.046901   25018 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.046982   24995 main.go:141] libmachine: (ha-790780) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:51:24.047052   24995 main.go:141] libmachine: (ha-790780) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:51:24.290133   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.289993   25018 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa...
	I0923 10:51:24.626743   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626586   25018 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk...
	I0923 10:51:24.626779   24995 main.go:141] libmachine: (ha-790780) DBG | Writing magic tar header
	I0923 10:51:24.626794   24995 main.go:141] libmachine: (ha-790780) DBG | Writing SSH key tar header
	I0923 10:51:24.626805   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626737   25018 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.626913   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 (perms=drwx------)
	I0923 10:51:24.626940   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780
	I0923 10:51:24.626950   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:51:24.626966   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:51:24.626976   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:51:24.626990   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:51:24.627002   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:51:24.627026   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:51:24.627037   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:24.627047   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.627061   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:51:24.627079   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:51:24.627093   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:51:24.627102   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home
	I0923 10:51:24.627113   24995 main.go:141] libmachine: (ha-790780) DBG | Skipping /home - not owner
	I0923 10:51:24.628104   24995 main.go:141] libmachine: (ha-790780) define libvirt domain using xml: 
	I0923 10:51:24.628127   24995 main.go:141] libmachine: (ha-790780) <domain type='kvm'>
	I0923 10:51:24.628137   24995 main.go:141] libmachine: (ha-790780)   <name>ha-790780</name>
	I0923 10:51:24.628145   24995 main.go:141] libmachine: (ha-790780)   <memory unit='MiB'>2200</memory>
	I0923 10:51:24.628153   24995 main.go:141] libmachine: (ha-790780)   <vcpu>2</vcpu>
	I0923 10:51:24.628162   24995 main.go:141] libmachine: (ha-790780)   <features>
	I0923 10:51:24.628169   24995 main.go:141] libmachine: (ha-790780)     <acpi/>
	I0923 10:51:24.628175   24995 main.go:141] libmachine: (ha-790780)     <apic/>
	I0923 10:51:24.628183   24995 main.go:141] libmachine: (ha-790780)     <pae/>
	I0923 10:51:24.628200   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628210   24995 main.go:141] libmachine: (ha-790780)   </features>
	I0923 10:51:24.628219   24995 main.go:141] libmachine: (ha-790780)   <cpu mode='host-passthrough'>
	I0923 10:51:24.628231   24995 main.go:141] libmachine: (ha-790780)   
	I0923 10:51:24.628242   24995 main.go:141] libmachine: (ha-790780)   </cpu>
	I0923 10:51:24.628248   24995 main.go:141] libmachine: (ha-790780)   <os>
	I0923 10:51:24.628256   24995 main.go:141] libmachine: (ha-790780)     <type>hvm</type>
	I0923 10:51:24.628266   24995 main.go:141] libmachine: (ha-790780)     <boot dev='cdrom'/>
	I0923 10:51:24.628274   24995 main.go:141] libmachine: (ha-790780)     <boot dev='hd'/>
	I0923 10:51:24.628283   24995 main.go:141] libmachine: (ha-790780)     <bootmenu enable='no'/>
	I0923 10:51:24.628289   24995 main.go:141] libmachine: (ha-790780)   </os>
	I0923 10:51:24.628298   24995 main.go:141] libmachine: (ha-790780)   <devices>
	I0923 10:51:24.628316   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='cdrom'>
	I0923 10:51:24.628332   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/boot2docker.iso'/>
	I0923 10:51:24.628339   24995 main.go:141] libmachine: (ha-790780)       <target dev='hdc' bus='scsi'/>
	I0923 10:51:24.628343   24995 main.go:141] libmachine: (ha-790780)       <readonly/>
	I0923 10:51:24.628348   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628352   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='disk'>
	I0923 10:51:24.628365   24995 main.go:141] libmachine: (ha-790780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:51:24.628374   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk'/>
	I0923 10:51:24.628379   24995 main.go:141] libmachine: (ha-790780)       <target dev='hda' bus='virtio'/>
	I0923 10:51:24.628383   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628388   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628398   24995 main.go:141] libmachine: (ha-790780)       <source network='mk-ha-790780'/>
	I0923 10:51:24.628422   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628441   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628451   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628456   24995 main.go:141] libmachine: (ha-790780)       <source network='default'/>
	I0923 10:51:24.628464   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628468   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628474   24995 main.go:141] libmachine: (ha-790780)     <serial type='pty'>
	I0923 10:51:24.628489   24995 main.go:141] libmachine: (ha-790780)       <target port='0'/>
	I0923 10:51:24.628497   24995 main.go:141] libmachine: (ha-790780)     </serial>
	I0923 10:51:24.628501   24995 main.go:141] libmachine: (ha-790780)     <console type='pty'>
	I0923 10:51:24.628509   24995 main.go:141] libmachine: (ha-790780)       <target type='serial' port='0'/>
	I0923 10:51:24.628513   24995 main.go:141] libmachine: (ha-790780)     </console>
	I0923 10:51:24.628518   24995 main.go:141] libmachine: (ha-790780)     <rng model='virtio'>
	I0923 10:51:24.628524   24995 main.go:141] libmachine: (ha-790780)       <backend model='random'>/dev/random</backend>
	I0923 10:51:24.628536   24995 main.go:141] libmachine: (ha-790780)     </rng>
	I0923 10:51:24.628558   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628571   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628577   24995 main.go:141] libmachine: (ha-790780)   </devices>
	I0923 10:51:24.628588   24995 main.go:141] libmachine: (ha-790780) </domain>
	I0923 10:51:24.628594   24995 main.go:141] libmachine: (ha-790780) 
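(As with the network, the domain XML above is submitted through the libvirt API; a hand-driven sketch of the same step, assuming the XML were saved to a file, would be roughly:)

# Define the domain from the generated XML and boot it
virsh --connect qemu:///system define /tmp/ha-790780-domain.xml
virsh --connect qemu:///system start ha-790780
# The later "Getting domain xml..." step corresponds roughly to
virsh --connect qemu:///system dumpxml ha-790780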
	I0923 10:51:24.633208   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:13:36:c6 in network default
	I0923 10:51:24.633757   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:24.633774   24995 main.go:141] libmachine: (ha-790780) Ensuring networks are active...
	I0923 10:51:24.634465   24995 main.go:141] libmachine: (ha-790780) Ensuring network default is active
	I0923 10:51:24.634776   24995 main.go:141] libmachine: (ha-790780) Ensuring network mk-ha-790780 is active
	I0923 10:51:24.635311   24995 main.go:141] libmachine: (ha-790780) Getting domain xml...
	I0923 10:51:24.635925   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:25.814040   24995 main.go:141] libmachine: (ha-790780) Waiting to get IP...
	I0923 10:51:25.814916   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:25.815340   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:25.815417   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:25.815355   25018 retry.go:31] will retry after 302.426541ms: waiting for machine to come up
	I0923 10:51:26.119886   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.120307   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.120331   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.120269   25018 retry.go:31] will retry after 296.601666ms: waiting for machine to come up
	I0923 10:51:26.418700   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.419028   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.419055   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.418981   25018 retry.go:31] will retry after 377.849162ms: waiting for machine to come up
	I0923 10:51:26.798501   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.798922   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.798948   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.798856   25018 retry.go:31] will retry after 450.118776ms: waiting for machine to come up
	I0923 10:51:27.250394   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.250790   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.250808   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.250758   25018 retry.go:31] will retry after 570.631994ms: waiting for machine to come up
	I0923 10:51:27.822428   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.822886   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.822908   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.822851   25018 retry.go:31] will retry after 623.272262ms: waiting for machine to come up
	I0923 10:51:28.447752   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:28.448147   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:28.448174   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:28.448108   25018 retry.go:31] will retry after 1.077429863s: waiting for machine to come up
	I0923 10:51:29.527061   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:29.527469   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:29.527505   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:29.527430   25018 retry.go:31] will retry after 917.693346ms: waiting for machine to come up
	I0923 10:51:30.446246   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:30.446572   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:30.446596   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:30.446529   25018 retry.go:31] will retry after 1.557196838s: waiting for machine to come up
	I0923 10:51:32.006148   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:32.006519   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:32.006543   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:32.006479   25018 retry.go:31] will retry after 2.085720919s: waiting for machine to come up
	I0923 10:51:34.093658   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:34.094039   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:34.094071   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:34.093997   25018 retry.go:31] will retry after 2.432097525s: waiting for machine to come up
	I0923 10:51:36.529456   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:36.529801   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:36.529829   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:36.529771   25018 retry.go:31] will retry after 3.373414151s: waiting for machine to come up
	I0923 10:51:39.904476   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:39.904832   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:39.904859   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:39.904782   25018 retry.go:31] will retry after 4.54310411s: waiting for machine to come up
	I0923 10:51:44.449079   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449524   24995 main.go:141] libmachine: (ha-790780) Found IP for machine: 192.168.39.234
	I0923 10:51:44.449566   24995 main.go:141] libmachine: (ha-790780) Reserving static IP address...
	I0923 10:51:44.449583   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has current primary IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449899   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find host DHCP lease matching {name: "ha-790780", mac: "52:54:00:56:51:7d", ip: "192.168.39.234"} in network mk-ha-790780
	I0923 10:51:44.518563   24995 main.go:141] libmachine: (ha-790780) DBG | Getting to WaitForSSH function...
	I0923 10:51:44.518595   24995 main.go:141] libmachine: (ha-790780) Reserved static IP address: 192.168.39.234
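(The retry loop above is the driver polling libvirt for a DHCP lease on the domain's MAC address; done by hand against the same network it would look roughly like this, with the sleep interval assumed:)

# Wait until the libvirt DHCP server hands out a lease to the new domain's MAC
until virsh --connect qemu:///system net-dhcp-leases mk-ha-790780 | grep -q '52:54:00:56:51:7d'; do
  sleep 2
done
# The lease table then shows the assigned address, 192.168.39.234 in this run
virsh --connect qemu:///system net-dhcp-leases mk-ha-790780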
	I0923 10:51:44.518615   24995 main.go:141] libmachine: (ha-790780) Waiting for SSH to be available...
	I0923 10:51:44.520920   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521300   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.521330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521451   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH client type: external
	I0923 10:51:44.521486   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa (-rw-------)
	I0923 10:51:44.521531   24995 main.go:141] libmachine: (ha-790780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:51:44.521546   24995 main.go:141] libmachine: (ha-790780) DBG | About to run SSH command:
	I0923 10:51:44.521554   24995 main.go:141] libmachine: (ha-790780) DBG | exit 0
	I0923 10:51:44.645412   24995 main.go:141] libmachine: (ha-790780) DBG | SSH cmd err, output: <nil>: 
	I0923 10:51:44.645692   24995 main.go:141] libmachine: (ha-790780) KVM machine creation complete!
	I0923 10:51:44.645984   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:44.646583   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646744   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646893   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:51:44.646905   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:51:44.648172   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:51:44.648194   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:51:44.648202   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:51:44.648210   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.650665   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.650987   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.651020   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.651139   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.651308   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651457   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651573   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.651700   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.651893   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.651906   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:51:44.756746   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
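(Both reachability checks above simply run "exit 0" over SSH and retry until it succeeds; reproduced by hand with the options the external SSH client logged earlier, that is approximately:)

# Retry a trivial command over SSH until the guest accepts the connection
until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o ConnectTimeout=10 -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa \
      docker@192.168.39.234 'exit 0'; do
  sleep 2
done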
	I0923 10:51:44.756773   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:51:44.756782   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.759344   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759648   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.759681   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759843   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.760022   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760232   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760420   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.760578   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.760787   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.760799   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:51:44.870171   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:51:44.870267   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:51:44.870273   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:51:44.870280   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870545   24995 buildroot.go:166] provisioning hostname "ha-790780"
	I0923 10:51:44.870571   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870747   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.873216   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873593   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.873628   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873723   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.873886   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874025   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874142   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.874274   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.874442   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.874453   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780 && echo "ha-790780" | sudo tee /etc/hostname
	I0923 10:51:44.995765   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 10:51:44.995787   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.998312   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998668   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.998696   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998853   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.999016   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999145   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999274   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.999435   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.999654   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.999678   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:51:45.115136   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:51:45.115177   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:51:45.115207   24995 buildroot.go:174] setting up certificates
	I0923 10:51:45.115216   24995 provision.go:84] configureAuth start
	I0923 10:51:45.115226   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:45.115475   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.117929   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118257   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.118279   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118435   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.120330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120597   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.120620   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120789   24995 provision.go:143] copyHostCerts
	I0923 10:51:45.120818   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120862   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:51:45.120884   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120966   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:51:45.121085   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121144   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:51:45.121152   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121191   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:51:45.121264   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121286   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:51:45.121292   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121321   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:51:45.121410   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780 san=[127.0.0.1 192.168.39.234 ha-790780 localhost minikube]
	I0923 10:51:45.266715   24995 provision.go:177] copyRemoteCerts
	I0923 10:51:45.266777   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:51:45.266798   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.269666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.269959   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.269988   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.270213   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.270378   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.270482   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.270568   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.355778   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:51:45.355843   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:51:45.380730   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:51:45.380795   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 10:51:45.414661   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:51:45.414743   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:51:45.441465   24995 provision.go:87] duration metric: took 326.238007ms to configureAuth
	I0923 10:51:45.441495   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:51:45.441678   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:51:45.441758   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.444126   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444463   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.444481   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444672   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.444841   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445006   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445137   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.445259   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.445469   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.445484   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:51:45.681011   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:51:45.681063   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:51:45.681071   24995 main.go:141] libmachine: (ha-790780) Calling .GetURL
	I0923 10:51:45.682285   24995 main.go:141] libmachine: (ha-790780) DBG | Using libvirt version 6000000
	I0923 10:51:45.684579   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.684908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.684938   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.685089   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:51:45.685101   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:51:45.685107   24995 client.go:171] duration metric: took 21.708786455s to LocalClient.Create
	I0923 10:51:45.685125   24995 start.go:167] duration metric: took 21.708900673s to libmachine.API.Create "ha-790780"
	I0923 10:51:45.685138   24995 start.go:293] postStartSetup for "ha-790780" (driver="kvm2")
	I0923 10:51:45.685151   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:51:45.685172   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.685421   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:51:45.685449   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.687596   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.687908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.687933   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.688073   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.688250   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.688408   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.688548   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.771920   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:51:45.776355   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:51:45.776391   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:51:45.776469   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:51:45.776563   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:51:45.776575   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:51:45.776693   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:51:45.786199   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:45.811518   24995 start.go:296] duration metric: took 126.349059ms for postStartSetup
	I0923 10:51:45.811609   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:45.812294   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.815129   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815486   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.815514   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815712   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:45.815895   24995 start.go:128] duration metric: took 21.857620166s to createHost
	I0923 10:51:45.815920   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.818316   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818630   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.818651   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818850   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.819010   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819165   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819278   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.819431   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.819590   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.819599   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:51:45.926174   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088705.899223528
	
	I0923 10:51:45.926195   24995 fix.go:216] guest clock: 1727088705.899223528
	I0923 10:51:45.926202   24995 fix.go:229] Guest: 2024-09-23 10:51:45.899223528 +0000 UTC Remote: 2024-09-23 10:51:45.81591122 +0000 UTC m=+21.959703843 (delta=83.312308ms)
	I0923 10:51:45.926237   24995 fix.go:200] guest clock delta is within tolerance: 83.312308ms
	I0923 10:51:45.926247   24995 start.go:83] releasing machines lock for "ha-790780", held for 21.968060369s
	I0923 10:51:45.926269   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.926484   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.929017   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929273   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.929296   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929451   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.929900   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930074   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930159   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:51:45.930211   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.930270   24995 ssh_runner.go:195] Run: cat /version.json
	I0923 10:51:45.930294   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.932829   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933159   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933185   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933203   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933326   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.933490   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.933624   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.933676   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933701   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933776   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.934053   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.934206   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.934327   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.934455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:46.030649   24995 ssh_runner.go:195] Run: systemctl --version
	I0923 10:51:46.036429   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:51:46.192093   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:51:46.197962   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:51:46.198029   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:51:46.215140   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:51:46.215162   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:51:46.215243   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:51:46.230784   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:51:46.244349   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:51:46.244409   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:51:46.258034   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:51:46.272100   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:51:46.381469   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:51:46.539101   24995 docker.go:233] disabling docker service ...
	I0923 10:51:46.539174   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:51:46.552908   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:51:46.565651   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:51:46.682294   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:51:46.796364   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:51:46.811412   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:51:46.829576   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:51:46.829645   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.839695   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:51:46.839786   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.849955   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.860106   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.870333   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:51:46.880826   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.891077   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.908248   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.918775   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:51:46.928824   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:51:46.928877   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:51:46.941980   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:51:46.951517   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:47.065808   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:51:47.163613   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:51:47.163683   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:51:47.168401   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:51:47.168449   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:51:47.172083   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:51:47.211404   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:51:47.211475   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.237894   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.265905   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:51:47.267109   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:47.269676   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.269976   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:47.269998   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.270189   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:51:47.274345   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:47.287451   24995 kubeadm.go:883] updating cluster {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:51:47.287548   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:47.287587   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:47.320493   24995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:51:47.320563   24995 ssh_runner.go:195] Run: which lz4
	I0923 10:51:47.324493   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 10:51:47.324590   24995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:51:47.328614   24995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:51:47.328641   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:51:48.664218   24995 crio.go:462] duration metric: took 1.339658259s to copy over tarball
	I0923 10:51:48.664282   24995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:51:50.637991   24995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.973686302s)
	I0923 10:51:50.638022   24995 crio.go:469] duration metric: took 1.973779288s to extract the tarball
	I0923 10:51:50.638029   24995 ssh_runner.go:146] rm: /preloaded.tar.lz4
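The preload flow above is: ask crictl for the expected images, and when the kube-apiserver image is missing, copy the cached lz4 tarball onto the node and unpack it into /var. A rough manual equivalent, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

	# Unpack the image preload into /var, preserving security xattrs, as the log runs it.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# With the preload extracted, crictl should now list the v1.31.1 images.
	sudo crictl images --output json
	# Remove the tarball afterwards, matching the rm in the log.
	rm -f /preloaded.tar.lz4
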
	I0923 10:51:50.675284   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:50.719521   24995 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:51:50.719546   24995 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:51:50.719554   24995 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0923 10:51:50.719685   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:51:50.719772   24995 ssh_runner.go:195] Run: crio config
	I0923 10:51:50.771719   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:50.771741   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:51:50.771749   24995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:51:50.771771   24995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-790780 NodeName:ha-790780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:51:50.771891   24995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-790780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
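Note that the InitConfiguration and ClusterConfiguration above are written against kubeadm.k8s.io/v1beta3, which kubeadm v1.31 still accepts but flags as deprecated later in this log. If the warning needs to go away, the config can be migrated with the command kubeadm itself recommends (a sketch, assuming the YAML above has been saved as kubeadm.yaml):

	# Rewrite the deprecated v1beta3 documents using the newer kubeadm API version.
	kubeadm config migrate --old-config kubeadm.yaml --new-config kubeadm-new.yaml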
	
	I0923 10:51:50.771915   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:51:50.771953   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:51:50.788554   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:51:50.788662   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
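kube-vip runs as a static pod that claims the HA virtual IP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane members, which is why the kubeadm config points control-plane.minikube.internal at that address. Two hedged spot checks on a control-plane node once the cluster is up (standard iproute2 and curl, nothing minikube-specific; the anonymous /healthz read relies on kubeadm's default RBAC):

	# The current kube-vip leader should hold the VIP as an extra address on eth0.
	ip addr show dev eth0 | grep 192.168.39.254
	# On kubeadm clusters the healthz endpoint is readable anonymously; expect "ok".
	curl -k https://192.168.39.254:8443/healthz
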
	I0923 10:51:50.788713   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:51:50.798905   24995 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:51:50.798967   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 10:51:50.808504   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 10:51:50.825113   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:51:50.841896   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 10:51:50.858441   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 10:51:50.875731   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:51:50.879691   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:50.892112   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:51.019767   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:51:51.037039   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.234
	I0923 10:51:51.037069   24995 certs.go:194] generating shared ca certs ...
	I0923 10:51:51.037091   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.037268   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:51:51.037324   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:51:51.037339   24995 certs.go:256] generating profile certs ...
	I0923 10:51:51.037431   24995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:51:51.037451   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt with IP's: []
	I0923 10:51:51.451020   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt ...
	I0923 10:51:51.451047   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt: {Name:mk7c4e9362162608bb6c01090da1551aaa823d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451244   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key ...
	I0923 10:51:51.451267   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key: {Name:mkcd6bfa32a894b89017c31deaa26203b3b4a176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451372   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888
	I0923 10:51:51.451392   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.254]
	I0923 10:51:51.607359   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 ...
	I0923 10:51:51.607386   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888: {Name:mka1f4b6ed48e33311f672d8b442f93c1d7c681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607561   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 ...
	I0923 10:51:51.607580   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888: {Name:mk49e13f50fd1588f0bd343a1960a01127e6eea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607676   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:51:51.607836   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:51:51.607925   24995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:51:51.607944   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt with IP's: []
	I0923 10:51:51.677169   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt ...
	I0923 10:51:51.677196   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt: {Name:mkd6d1ef61128b90a97b097c5fd8695ddf079ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677369   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key ...
	I0923 10:51:51.677400   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key: {Name:mk47fffc62dd3ae10bfeea7ae4b46f34ad5c053e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677517   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:51:51.677535   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:51:51.677548   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:51:51.677618   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:51:51.677647   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:51:51.677668   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:51:51.677686   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:51:51.677703   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:51:51.677763   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:51:51.677808   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:51:51.677821   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:51:51.677855   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:51:51.677884   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:51:51.677916   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:51:51.677966   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:51.678003   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:51.678023   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:51:51.678049   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.679006   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:51:51.705139   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:51:51.728566   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:51:51.751552   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:51:51.775089   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 10:51:51.801987   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:51:51.826155   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:51:51.852767   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:51:51.876344   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:51:51.905311   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:51:51.928779   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:51:51.952260   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:51:51.969409   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:51:51.975384   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:51:51.986501   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.990964   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.991023   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.996747   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:51:52.007942   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:51:52.018842   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023215   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023268   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.028919   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:51:52.039648   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:51:52.050482   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054942   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054996   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.061057   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
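The sequence above installs each CA under /usr/share/ca-certificates, links it into /etc/ssl/certs, and adds a second symlink named after the certificate's OpenSSL subject hash (b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the node locate trusted CAs. A condensed sketch of the same steps for one certificate:

	# Expose the CA under /etc/ssl/certs (the log creates this link first).
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	# Compute the subject hash OpenSSL uses to look the CA up at verification time.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Add the hash-named symlink pointing at the installed CA.
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
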
	I0923 10:51:52.072692   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:51:52.076951   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:51:52.077018   24995 kubeadm.go:392] StartCluster: {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:52.077118   24995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:51:52.077175   24995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:51:52.116347   24995 cri.go:89] found id: ""
	I0923 10:51:52.116428   24995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:51:52.126761   24995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:51:52.140367   24995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:51:52.152008   24995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:51:52.152029   24995 kubeadm.go:157] found existing configuration files:
	
	I0923 10:51:52.152082   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:51:52.162100   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:51:52.162178   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:51:52.172716   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:51:52.182352   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:51:52.182416   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:51:52.192324   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.201509   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:51:52.201567   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.211076   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:51:52.220241   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:51:52.220301   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:51:52.229931   24995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:51:52.330228   24995 kubeadm.go:310] W0923 10:51:52.311529     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.331060   24995 kubeadm.go:310] W0923 10:51:52.312477     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.439125   24995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:52:03.033231   24995 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:52:03.033332   24995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:52:03.033492   24995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:52:03.033623   24995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:52:03.033751   24995 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:52:03.033844   24995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:52:03.035457   24995 out.go:235]   - Generating certificates and keys ...
	I0923 10:52:03.035550   24995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:52:03.035642   24995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:52:03.035741   24995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:52:03.035823   24995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:52:03.035900   24995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:52:03.035992   24995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:52:03.036084   24995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:52:03.036211   24995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036285   24995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:52:03.036444   24995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036563   24995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:52:03.036657   24995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:52:03.036710   24995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:52:03.036757   24995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:52:03.036842   24995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:52:03.036923   24995 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:52:03.037009   24995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:52:03.037098   24995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:52:03.037182   24995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:52:03.037302   24995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:52:03.037427   24995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:52:03.038904   24995 out.go:235]   - Booting up control plane ...
	I0923 10:52:03.039001   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:52:03.039082   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:52:03.039176   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:52:03.039295   24995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:52:03.039422   24995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:52:03.039482   24995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:52:03.039635   24995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:52:03.039761   24995 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:52:03.039849   24995 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.524673ms
	I0923 10:52:03.039940   24995 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:52:03.040024   24995 kubeadm.go:310] [api-check] The API server is healthy after 5.986201438s
	I0923 10:52:03.040175   24995 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:52:03.040361   24995 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:52:03.040444   24995 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:52:03.040632   24995 kubeadm.go:310] [mark-control-plane] Marking the node ha-790780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:52:03.040704   24995 kubeadm.go:310] [bootstrap-token] Using token: xsoed2.p6r9ib7q4k96hg0w
	I0923 10:52:03.042019   24995 out.go:235]   - Configuring RBAC rules ...
	I0923 10:52:03.042101   24995 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:52:03.042173   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:52:03.042294   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:52:03.042406   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:52:03.042505   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:52:03.042577   24995 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:52:03.042670   24995 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:52:03.042707   24995 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:52:03.042747   24995 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:52:03.042753   24995 kubeadm.go:310] 
	I0923 10:52:03.042801   24995 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:52:03.042807   24995 kubeadm.go:310] 
	I0923 10:52:03.042880   24995 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:52:03.042886   24995 kubeadm.go:310] 
	I0923 10:52:03.042910   24995 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:52:03.042960   24995 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:52:03.043006   24995 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:52:03.043012   24995 kubeadm.go:310] 
	I0923 10:52:03.043055   24995 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:52:03.043062   24995 kubeadm.go:310] 
	I0923 10:52:03.043106   24995 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:52:03.043112   24995 kubeadm.go:310] 
	I0923 10:52:03.043171   24995 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:52:03.043244   24995 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:52:03.043303   24995 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:52:03.043309   24995 kubeadm.go:310] 
	I0923 10:52:03.043383   24995 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:52:03.043484   24995 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:52:03.043504   24995 kubeadm.go:310] 
	I0923 10:52:03.043608   24995 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.043699   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:52:03.043719   24995 kubeadm.go:310] 	--control-plane 
	I0923 10:52:03.043725   24995 kubeadm.go:310] 
	I0923 10:52:03.043823   24995 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:52:03.043833   24995 kubeadm.go:310] 
	I0923 10:52:03.043941   24995 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.044037   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:52:03.044047   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:52:03.044054   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:52:03.045502   24995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:52:03.046832   24995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:52:03.052467   24995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:52:03.052487   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:52:03.076247   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:52:03.444143   24995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:52:03.444243   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:03.444282   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780 minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=true
	I0923 10:52:03.495007   24995 ops.go:34] apiserver oom_adj: -16
	I0923 10:52:03.592144   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.092654   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.592338   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.092806   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.592594   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.092195   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.201502   24995 kubeadm.go:1113] duration metric: took 2.757318832s to wait for elevateKubeSystemPrivileges
	I0923 10:52:06.201546   24995 kubeadm.go:394] duration metric: took 14.124531532s to StartCluster
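The elevateKubeSystemPrivileges step timed above is minikube binding cluster-admin to kube-system's default service account and then polling kubectl until the default service account exists. A rough manual equivalent (granting cluster-admin to a default service account is a minikube test convenience, not a pattern to copy into other clusters):

	# Bind cluster-admin to kube-system's default service account, as the log does.
	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Poll until the default service account has been created by kube-controller-manager.
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
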
	I0923 10:52:06.201569   24995 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.201664   24995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.202567   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.202810   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:52:06.202807   24995 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.202841   24995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 10:52:06.202900   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:52:06.202929   24995 addons.go:69] Setting storage-provisioner=true in profile "ha-790780"
	I0923 10:52:06.202937   24995 addons.go:69] Setting default-storageclass=true in profile "ha-790780"
	I0923 10:52:06.202954   24995 addons.go:234] Setting addon storage-provisioner=true in "ha-790780"
	I0923 10:52:06.202961   24995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-790780"
	I0923 10:52:06.202988   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.203012   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.203296   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203334   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.203433   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203475   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.218688   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0923 10:52:06.218748   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0923 10:52:06.219240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219291   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219815   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219816   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219840   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.219858   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.220231   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220235   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220427   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.220753   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.220795   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.222626   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.222971   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 10:52:06.223539   24995 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 10:52:06.223901   24995 addons.go:234] Setting addon default-storageclass=true in "ha-790780"
	I0923 10:52:06.223946   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.224319   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.224365   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.236739   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0923 10:52:06.237265   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.237749   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.237769   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.238124   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.238287   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.238667   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0923 10:52:06.239113   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.239656   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.239679   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.239955   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.239993   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.240401   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.240443   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.241840   24995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:52:06.243145   24995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.243160   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:52:06.243172   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.246249   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246639   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.246666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246813   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.246982   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.247123   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.247259   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:06.256004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0923 10:52:06.256499   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.256973   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.256999   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.257343   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.257522   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.259210   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.259387   24995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.259399   24995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:52:06.259412   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.262267   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262666   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.262687   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262832   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.262990   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.263138   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.263273   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:06.304503   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:52:06.398460   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.446811   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.632495   24995 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
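	The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1) before the ConfigMap is replaced. A minimal Go sketch of just the hosts-stanza insertion (a hypothetical helper, not minikube's actual implementation; the `log` directive part of the pipeline is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza, resolving host.minikube.internal
// to the gateway IP, immediately before the "forward" plugin line of a
// CoreDNS Corefile. Hypothetical helper used only to illustrate the step above.
func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert before the forward plugin
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```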
	I0923 10:52:06.919542   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919563   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919636   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919658   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919873   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.919902   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.919910   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.919919   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919926   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919965   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920081   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920099   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920119   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920133   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920197   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.920208   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.920378   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920390   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920407   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920451   24995 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 10:52:06.920471   24995 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 10:52:06.920600   24995 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 10:52:06.920610   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.920623   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.920629   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.937923   24995 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 10:52:06.938595   24995 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 10:52:06.938612   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.938621   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.938629   24995 round_trippers.go:473]     Content-Type: application/json
	I0923 10:52:06.938632   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.947896   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:52:06.948322   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.948337   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.948594   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.948617   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.948630   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.950152   24995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 10:52:06.951554   24995 addons.go:510] duration metric: took 748.719933ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 10:52:06.951590   24995 start.go:246] waiting for cluster config update ...
	I0923 10:52:06.951605   24995 start.go:255] writing updated cluster config ...
	I0923 10:52:06.953365   24995 out.go:201] 
	I0923 10:52:06.954972   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.955040   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.956615   24995 out.go:177] * Starting "ha-790780-m02" control-plane node in "ha-790780" cluster
	I0923 10:52:06.957684   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:52:06.957708   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:52:06.957808   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:52:06.957819   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:52:06.957884   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.958050   24995 start.go:360] acquireMachinesLock for ha-790780-m02: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:52:06.958105   24995 start.go:364] duration metric: took 32.264µs to acquireMachinesLock for "ha-790780-m02"
	I0923 10:52:06.958126   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.958191   24995 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 10:52:06.959878   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:52:06.959980   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.960026   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.976035   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0923 10:52:06.976582   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.977118   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.977143   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.977540   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.977757   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:06.977903   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:06.978091   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:52:06.978121   24995 client.go:168] LocalClient.Create starting
	I0923 10:52:06.978164   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:52:06.978206   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978227   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978286   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:52:06.978303   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978310   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978324   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:52:06.978329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .PreCreateCheck
	I0923 10:52:06.978542   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:06.978925   24995 main.go:141] libmachine: Creating machine...
	I0923 10:52:06.978941   24995 main.go:141] libmachine: (ha-790780-m02) Calling .Create
	I0923 10:52:06.979102   24995 main.go:141] libmachine: (ha-790780-m02) Creating KVM machine...
	I0923 10:52:06.980456   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing default KVM network
	I0923 10:52:06.980575   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing private KVM network mk-ha-790780
	I0923 10:52:06.980736   24995 main.go:141] libmachine: (ha-790780-m02) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:06.980762   24995 main.go:141] libmachine: (ha-790780-m02) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:52:06.980809   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:06.980717   25359 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:06.980894   24995 main.go:141] libmachine: (ha-790780-m02) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:52:07.232203   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.232068   25359 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa...
	I0923 10:52:07.333393   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333263   25359 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk...
	I0923 10:52:07.333421   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing magic tar header
	I0923 10:52:07.333438   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing SSH key tar header
	I0923 10:52:07.333446   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333398   25359 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:07.333511   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02
	I0923 10:52:07.333532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:52:07.333540   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 (perms=drwx------)
	I0923 10:52:07.333557   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:52:07.333571   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:52:07.333582   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:07.333598   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:52:07.333609   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:52:07.333623   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:52:07.333638   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:52:07.333647   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:52:07.333658   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:52:07.333669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home
	I0923 10:52:07.333679   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
	I0923 10:52:07.333718   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Skipping /home - not owner
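	The "Checking permissions on dir" / "Setting executable bit" / "Skipping /home - not owner" lines above walk each parent of the machine directory and make it traversable, but only when the current user owns it. A hedged Go sketch of that walk (Linux-only, hypothetical helper, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strconv"
	"syscall"
)

// ensureTraversable walks from dir up to the filesystem root, adding the
// owner-execute bit to every directory owned by the current user and
// skipping the rest, mirroring the permission-fixing log lines above.
func ensureTraversable(dir string) error {
	me, err := user.Current()
	if err != nil {
		return err
	}
	uid, _ := strconv.Atoi(me.Uid)
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok && int(st.Uid) == uid {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				return err
			}
		} else {
			fmt.Printf("Skipping %s - not owner\n", dir)
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return nil
		}
		dir = parent
	}
}

func main() {
	if err := ensureTraversable(os.Getenv("HOME")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```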
	I0923 10:52:07.334599   24995 main.go:141] libmachine: (ha-790780-m02) define libvirt domain using xml: 
	I0923 10:52:07.334622   24995 main.go:141] libmachine: (ha-790780-m02) <domain type='kvm'>
	I0923 10:52:07.334660   24995 main.go:141] libmachine: (ha-790780-m02)   <name>ha-790780-m02</name>
	I0923 10:52:07.334682   24995 main.go:141] libmachine: (ha-790780-m02)   <memory unit='MiB'>2200</memory>
	I0923 10:52:07.334692   24995 main.go:141] libmachine: (ha-790780-m02)   <vcpu>2</vcpu>
	I0923 10:52:07.334705   24995 main.go:141] libmachine: (ha-790780-m02)   <features>
	I0923 10:52:07.334717   24995 main.go:141] libmachine: (ha-790780-m02)     <acpi/>
	I0923 10:52:07.334724   24995 main.go:141] libmachine: (ha-790780-m02)     <apic/>
	I0923 10:52:07.334732   24995 main.go:141] libmachine: (ha-790780-m02)     <pae/>
	I0923 10:52:07.334741   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.334753   24995 main.go:141] libmachine: (ha-790780-m02)   </features>
	I0923 10:52:07.334764   24995 main.go:141] libmachine: (ha-790780-m02)   <cpu mode='host-passthrough'>
	I0923 10:52:07.334772   24995 main.go:141] libmachine: (ha-790780-m02)   
	I0923 10:52:07.334781   24995 main.go:141] libmachine: (ha-790780-m02)   </cpu>
	I0923 10:52:07.334789   24995 main.go:141] libmachine: (ha-790780-m02)   <os>
	I0923 10:52:07.334798   24995 main.go:141] libmachine: (ha-790780-m02)     <type>hvm</type>
	I0923 10:52:07.334807   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='cdrom'/>
	I0923 10:52:07.334816   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='hd'/>
	I0923 10:52:07.334823   24995 main.go:141] libmachine: (ha-790780-m02)     <bootmenu enable='no'/>
	I0923 10:52:07.334834   24995 main.go:141] libmachine: (ha-790780-m02)   </os>
	I0923 10:52:07.334842   24995 main.go:141] libmachine: (ha-790780-m02)   <devices>
	I0923 10:52:07.334853   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='cdrom'>
	I0923 10:52:07.334882   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/boot2docker.iso'/>
	I0923 10:52:07.334904   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hdc' bus='scsi'/>
	I0923 10:52:07.334913   24995 main.go:141] libmachine: (ha-790780-m02)       <readonly/>
	I0923 10:52:07.334923   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334932   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='disk'>
	I0923 10:52:07.334946   24995 main.go:141] libmachine: (ha-790780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:52:07.334959   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk'/>
	I0923 10:52:07.334968   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hda' bus='virtio'/>
	I0923 10:52:07.334978   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334987   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.334997   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='mk-ha-790780'/>
	I0923 10:52:07.335007   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335023   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335035   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.335044   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='default'/>
	I0923 10:52:07.335058   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335109   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335132   24995 main.go:141] libmachine: (ha-790780-m02)     <serial type='pty'>
	I0923 10:52:07.335143   24995 main.go:141] libmachine: (ha-790780-m02)       <target port='0'/>
	I0923 10:52:07.335158   24995 main.go:141] libmachine: (ha-790780-m02)     </serial>
	I0923 10:52:07.335174   24995 main.go:141] libmachine: (ha-790780-m02)     <console type='pty'>
	I0923 10:52:07.335192   24995 main.go:141] libmachine: (ha-790780-m02)       <target type='serial' port='0'/>
	I0923 10:52:07.335204   24995 main.go:141] libmachine: (ha-790780-m02)     </console>
	I0923 10:52:07.335212   24995 main.go:141] libmachine: (ha-790780-m02)     <rng model='virtio'>
	I0923 10:52:07.335225   24995 main.go:141] libmachine: (ha-790780-m02)       <backend model='random'>/dev/random</backend>
	I0923 10:52:07.335234   24995 main.go:141] libmachine: (ha-790780-m02)     </rng>
	I0923 10:52:07.335249   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335266   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335277   24995 main.go:141] libmachine: (ha-790780-m02)   </devices>
	I0923 10:52:07.335286   24995 main.go:141] libmachine: (ha-790780-m02) </domain>
	I0923 10:52:07.335295   24995 main.go:141] libmachine: (ha-790780-m02) 
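	The block above is the libvirt domain XML the driver defines for the new node, logged line by line. A trimmed Go sketch of how such a definition can be rendered from a handful of values with text/template (paths are placeholders and the template is a simplified rendering, not the exact XML minikube emits):

```go
package main

import (
	"os"
	"text/template"
)

// domainSpec holds the values substituted into the domain definition above.
type domainSpec struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	spec := domainSpec{
		Name:      "ha-790780-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",       // placeholder path
		DiskPath:  "/path/to/ha-790780-m02.rawdisk", // placeholder path
		Network:   "mk-ha-790780",
	}
	// Render the XML to stdout; virsh define or the libvirt API would consume it.
	if err := template.Must(template.New("domain").Parse(domainXML)).Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}
```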
	I0923 10:52:07.341524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:71:94:5b in network default
	I0923 10:52:07.342077   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring networks are active...
	I0923 10:52:07.342095   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:07.342878   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network default is active
	I0923 10:52:07.343243   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network mk-ha-790780 is active
	I0923 10:52:07.343596   24995 main.go:141] libmachine: (ha-790780-m02) Getting domain xml...
	I0923 10:52:07.344221   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
	I0923 10:52:08.567103   24995 main.go:141] libmachine: (ha-790780-m02) Waiting to get IP...
	I0923 10:52:08.567991   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.568397   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.568451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.568387   25359 retry.go:31] will retry after 271.175765ms: waiting for machine to come up
	I0923 10:52:08.840977   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.841448   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.841471   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.841414   25359 retry.go:31] will retry after 362.305584ms: waiting for machine to come up
	I0923 10:52:09.205937   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.206493   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.206603   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.206454   25359 retry.go:31] will retry after 321.793905ms: waiting for machine to come up
	I0923 10:52:09.529876   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.530376   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.530401   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.530327   25359 retry.go:31] will retry after 559.183772ms: waiting for machine to come up
	I0923 10:52:10.091098   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.091500   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.091524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.091457   25359 retry.go:31] will retry after 578.148121ms: waiting for machine to come up
	I0923 10:52:10.671087   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.671615   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.671645   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.671580   25359 retry.go:31] will retry after 633.076035ms: waiting for machine to come up
	I0923 10:52:11.306241   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:11.306681   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:11.306701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:11.306639   25359 retry.go:31] will retry after 1.109332207s: waiting for machine to come up
	I0923 10:52:12.417432   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:12.417916   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:12.417942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:12.417872   25359 retry.go:31] will retry after 1.294744351s: waiting for machine to come up
	I0923 10:52:13.713819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:13.714303   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:13.714329   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:13.714250   25359 retry.go:31] will retry after 1.531952439s: waiting for machine to come up
	I0923 10:52:15.247542   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:15.248025   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:15.248057   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:15.247975   25359 retry.go:31] will retry after 1.941306258s: waiting for machine to come up
	I0923 10:52:17.190839   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:17.191321   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:17.191351   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:17.191284   25359 retry.go:31] will retry after 2.353774872s: waiting for machine to come up
	I0923 10:52:19.546668   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:19.547031   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:19.547055   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:19.546983   25359 retry.go:31] will retry after 2.747965423s: waiting for machine to come up
	I0923 10:52:22.297443   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:22.297864   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:22.297889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:22.297821   25359 retry.go:31] will retry after 4.500988279s: waiting for machine to come up
	I0923 10:52:26.799947   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:26.800373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:26.800398   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:26.800337   25359 retry.go:31] will retry after 3.653543746s: waiting for machine to come up
	I0923 10:52:30.458551   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.459044   24995 main.go:141] libmachine: (ha-790780-m02) Found IP for machine: 192.168.39.43
	I0923 10:52:30.459067   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
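	The retry.go lines above ("will retry after 271ms / 362ms / ... / 4.5s") poll the DHCP leases for the new domain's MAC address with a growing, jittered delay until an IP appears. A minimal Go sketch of that wait-for-IP pattern, assuming a caller-supplied `lookup` function (hypothetical, not minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it reports an address, sleeping a jittered,
// growing interval between attempts, or gives up when the timeout elapses.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		// Grow the base interval and add jitter so retries do not align.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d: no DHCP lease yet, retrying after %v\n", attempt, sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to get an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, bool) {
		calls++
		return "192.168.39.43", calls >= 3 // pretend the lease shows up on the third poll
	}, time.Minute)
	fmt.Println(ip, err)
}
```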
	I0923 10:52:30.459075   24995 main.go:141] libmachine: (ha-790780-m02) Reserving static IP address...
	I0923 10:52:30.459483   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "ha-790780-m02", mac: "52:54:00:6f:fc:60", ip: "192.168.39.43"} in network mk-ha-790780
	I0923 10:52:30.533257   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:30.533288   24995 main.go:141] libmachine: (ha-790780-m02) Reserved static IP address: 192.168.39.43
	I0923 10:52:30.533301   24995 main.go:141] libmachine: (ha-790780-m02) Waiting for SSH to be available...
	I0923 10:52:30.536138   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.536313   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780
	I0923 10:52:30.536335   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find defined IP address of network mk-ha-790780 interface with MAC address 52:54:00:6f:fc:60
	I0923 10:52:30.536505   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:30.536532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:30.536568   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:30.536590   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:30.536606   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:30.540119   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: exit status 255: 
	I0923 10:52:30.540140   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 10:52:30.540147   24995 main.go:141] libmachine: (ha-790780-m02) DBG | command : exit 0
	I0923 10:52:30.540151   24995 main.go:141] libmachine: (ha-790780-m02) DBG | err     : exit status 255
	I0923 10:52:30.540162   24995 main.go:141] libmachine: (ha-790780-m02) DBG | output  : 
	I0923 10:52:33.541623   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:33.544182   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544547   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.544574   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544757   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:33.544784   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:33.544814   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:33.544831   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:33.544854   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:33.669504   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 10:52:33.669774   24995 main.go:141] libmachine: (ha-790780-m02) KVM machine creation complete!
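	The WaitForSSH step above shells out to the external ssh client with non-interactive options and runs `exit 0`, treating exit status 255 as "sshd not ready yet" and retrying. A hedged Go sketch of that readiness probe (paths and addresses are placeholders; not minikube's sshutil code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest via the system ssh client, retrying
// until sshd answers or the attempt budget is exhausted.
func sshReady(user, addr, keyPath string, attempts int, pause time.Duration) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // sshd accepted the connection and ran the command
		}
		time.Sleep(pause)
	}
	return fmt.Errorf("ssh never became available: %w", err)
}

func main() {
	err := sshReady("docker", "192.168.39.43", "/path/to/id_rsa", 10, 3*time.Second)
	fmt.Println(err)
}
```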
	I0923 10:52:33.670110   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:33.670656   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.670934   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.671133   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:52:33.671150   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetState
	I0923 10:52:33.672305   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:52:33.672319   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:52:33.672324   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:52:33.672329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.674474   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.674843   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674997   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.675174   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675328   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675465   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.675610   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.675839   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.675852   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:52:33.776748   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:33.776774   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:52:33.776785   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.779405   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779751   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.779783   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779884   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.780088   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780269   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780419   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.780568   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.780760   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.780773   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:52:33.882210   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:52:33.882291   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:52:33.882305   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:52:33.882314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882575   24995 buildroot.go:166] provisioning hostname "ha-790780-m02"
	I0923 10:52:33.882600   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882773   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.885308   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885642   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.885677   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885853   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.886030   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886155   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886300   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.886430   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.886626   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.886642   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m02 && echo "ha-790780-m02" | sudo tee /etc/hostname
	I0923 10:52:34.003577   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m02
	
	I0923 10:52:34.003598   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.006028   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006433   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.006454   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.006821   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.006980   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.007139   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.007310   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.007465   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.007480   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:52:34.118625   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:34.118662   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:52:34.118683   24995 buildroot.go:174] setting up certificates
	I0923 10:52:34.118696   24995 provision.go:84] configureAuth start
	I0923 10:52:34.118714   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:34.118982   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.121671   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122010   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.122038   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122133   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.124342   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124650   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.124675   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124825   24995 provision.go:143] copyHostCerts
	I0923 10:52:34.124854   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124893   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:52:34.124906   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124985   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:52:34.125072   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125097   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:52:34.125107   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125144   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:52:34.125212   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125235   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:52:34.125242   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125281   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:52:34.125349   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m02 san=[127.0.0.1 192.168.39.43 ha-790780-m02 localhost minikube]
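	The line above generates a serving certificate whose SANs cover the loopback address, the node IP, the node hostname, localhost and "minikube". A compact Go sketch of issuing such a SAN certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the cluster CA key from ca-key.pem):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a PEM-encoded server certificate whose SANs include
// the given hostnames plus 127.0.0.1 and the node IP.
func newServerCert(ip string, hostnames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-790780-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     hostnames,
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServerCert("192.168.39.43", []string{"ha-790780-m02", "localhost", "minikube"})
	fmt.Println(len(pemBytes), err)
}
```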
	I0923 10:52:34.193891   24995 provision.go:177] copyRemoteCerts
	I0923 10:52:34.193957   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:52:34.193986   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.196570   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.196865   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.196889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.197016   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.197136   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.197266   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.197369   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.281916   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:52:34.281976   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:52:34.308044   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:52:34.308105   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:52:34.333433   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:52:34.333520   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:52:34.360112   24995 provision.go:87] duration metric: took 241.398124ms to configureAuth
	I0923 10:52:34.360147   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:52:34.360368   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:34.360455   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.363054   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.363404   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363563   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.363803   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.363983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.364144   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.364318   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.364480   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.364494   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:52:34.591141   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:52:34.591170   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:52:34.591177   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetURL
	I0923 10:52:34.592369   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using libvirt version 6000000
	I0923 10:52:34.594796   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595094   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.595121   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595270   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:52:34.595283   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:52:34.595290   24995 client.go:171] duration metric: took 27.617159251s to LocalClient.Create
	I0923 10:52:34.595315   24995 start.go:167] duration metric: took 27.61722609s to libmachine.API.Create "ha-790780"
	I0923 10:52:34.595328   24995 start.go:293] postStartSetup for "ha-790780-m02" (driver="kvm2")
	I0923 10:52:34.595341   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:52:34.595379   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.595602   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:52:34.595632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.597589   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.597898   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.597926   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.598021   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.598195   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.598358   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.598520   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.684195   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:52:34.689242   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:52:34.689272   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:52:34.689348   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:52:34.689459   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:52:34.689471   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:52:34.689556   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:52:34.700320   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:34.725191   24995 start.go:296] duration metric: took 129.850231ms for postStartSetup
	I0923 10:52:34.725244   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:34.725799   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.728545   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.728886   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.728913   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.729093   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:34.729294   24995 start.go:128] duration metric: took 27.771090928s to createHost
	I0923 10:52:34.729314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.731286   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731644   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.731669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731823   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.731990   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732151   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732281   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.732440   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.732637   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.732658   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:52:34.834231   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088754.794402068
	
	I0923 10:52:34.834249   24995 fix.go:216] guest clock: 1727088754.794402068
	I0923 10:52:34.834255   24995 fix.go:229] Guest: 2024-09-23 10:52:34.794402068 +0000 UTC Remote: 2024-09-23 10:52:34.729306022 +0000 UTC m=+70.873098644 (delta=65.096046ms)
	I0923 10:52:34.834270   24995 fix.go:200] guest clock delta is within tolerance: 65.096046ms
	I0923 10:52:34.834274   24995 start.go:83] releasing machines lock for "ha-790780-m02", held for 27.876160912s
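[editor's note] The guest-clock check above runs `date +%s.%N` on the new machine and compares the result with the timestamp the host recorded, accepting the machine when the delta stays inside a tolerance. Below is a minimal, illustrative Go sketch of that comparison; the values are copied from the log lines above, while the helper name and the one-second tolerance are assumptions for illustration, not minikube's actual fix.go code.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` (seconds.nanoseconds since the
// Unix epoch) and returns how far the guest clock sits ahead of the reference.
func clockDelta(guestOutput string, reference time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(reference), nil
}

func main() {
	// Values taken from the log above: guest `date +%s.%N` output vs. the
	// remote timestamp recorded by the host (nanosecond precision is
	// approximate after the float64 round-trip; fine for a sketch).
	delta, err := clockDelta("1727088754.794402068", time.Unix(0, 1727088754729306022))
	if err != nil {
		panic(err)
	}
	// Assumed tolerance for illustration only; the real threshold lives in fix.go.
	const tolerance = time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
}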
	I0923 10:52:34.834293   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.834511   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.837173   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.837494   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.837520   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.839594   24995 out.go:177] * Found network options:
	I0923 10:52:34.840920   24995 out.go:177]   - NO_PROXY=192.168.39.234
	W0923 10:52:34.842074   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842099   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842612   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842764   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842853   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:52:34.842888   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	W0923 10:52:34.842903   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842968   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:52:34.842983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.845348   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845558   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845723   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845847   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.845942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845969   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.846014   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846122   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.846203   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846268   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846323   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.846389   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846494   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:35.081176   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:52:35.087607   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:52:35.087663   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:52:35.103528   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:52:35.103555   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:52:35.103622   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:52:35.120834   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:52:35.135839   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:52:35.135902   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:52:35.150051   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:52:35.166191   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:52:35.300053   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:52:35.467434   24995 docker.go:233] disabling docker service ...
	I0923 10:52:35.467505   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:52:35.481901   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:52:35.494845   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:52:35.623420   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:52:35.753868   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:52:35.768422   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:52:35.787586   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:52:35.787649   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.799053   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:52:35.799126   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.810558   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.821594   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.832724   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:52:35.843898   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.855726   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.873592   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.884110   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:52:35.893791   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:52:35.893856   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:52:35.906807   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:52:35.916973   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:36.035527   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:52:36.128791   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:52:36.128861   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:52:36.133474   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:52:36.133527   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:52:36.137009   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:52:36.176502   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:52:36.176587   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.204178   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.234043   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:52:36.235621   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:52:36.236738   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:36.239083   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:36.239480   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239678   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:52:36.243606   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:52:36.255882   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:52:36.256081   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:36.256374   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.256416   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.270776   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:52:36.271240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.271692   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.271718   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.271991   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.272238   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:36.273724   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:36.274034   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.274069   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.288288   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0923 10:52:36.288706   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.289138   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.289156   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.289414   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.289558   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:36.289677   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.43
	I0923 10:52:36.289688   24995 certs.go:194] generating shared ca certs ...
	I0923 10:52:36.289705   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.289819   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:52:36.289854   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:52:36.289863   24995 certs.go:256] generating profile certs ...
	I0923 10:52:36.289959   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:52:36.289984   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0
	I0923 10:52:36.289997   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.254]
	I0923 10:52:36.380163   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 ...
	I0923 10:52:36.380191   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0: {Name:mkcca314f563c49b9f271f2aa6db3e6f62b32cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380347   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 ...
	I0923 10:52:36.380359   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0: {Name:mkec241aeb6bb82c01cd41cf66da0be3a70fdccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380434   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:52:36.380560   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:52:36.380681   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:52:36.380695   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:52:36.380707   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:52:36.380720   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:52:36.380735   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:52:36.380747   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:52:36.380759   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:52:36.380771   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:52:36.380783   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:52:36.380831   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:52:36.380860   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:52:36.380869   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:52:36.380891   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:52:36.380911   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:52:36.380932   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:52:36.380968   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:36.380992   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.381005   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.381017   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:52:36.381045   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:36.384036   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384404   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:36.384430   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384577   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:36.384750   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:36.384881   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:36.384987   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:36.457700   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:52:36.466345   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:52:36.478344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:52:36.483561   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:52:36.494070   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:52:36.498527   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:52:36.509289   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:52:36.514499   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:52:36.524608   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:52:36.528591   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:52:36.538971   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:52:36.542839   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:52:36.553841   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:52:36.579371   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:52:36.604546   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:52:36.628677   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:52:36.653097   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 10:52:36.680685   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:52:36.705242   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:52:36.729370   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:52:36.752651   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:52:36.776422   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:52:36.799568   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:52:36.823834   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:52:36.840782   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:52:36.857346   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:52:36.873712   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:52:36.889839   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:52:36.905626   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:52:36.921660   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:52:36.938136   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:52:36.943716   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:52:36.953982   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958476   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958521   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.964147   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:52:36.974525   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:52:36.985437   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989845   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989893   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.995312   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:52:37.005409   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:52:37.015583   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019922   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019974   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.025448   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:52:37.035595   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:52:37.039362   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:52:37.039415   24995 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.1 crio true true} ...
	I0923 10:52:37.039492   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:52:37.039513   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:52:37.039552   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:52:37.055529   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:52:37.055596   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 10:52:37.055650   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.065414   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:52:37.065472   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.075491   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:52:37.075506   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 10:52:37.075520   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.075497   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 10:52:37.075574   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.080294   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:52:37.080325   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:52:38.529041   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.529117   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.533986   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:52:38.534028   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:52:39.337289   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:52:39.353663   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.353773   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.358145   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:52:39.358182   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
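[editor's note] The binary transfer above downloads kubectl, kubeadm and kubelet with a `?checksum=file:<url>.sha256` query, i.e. the published SHA-256 file is fetched alongside each binary and the digest is verified before the file is cached and copied to the node. The Go sketch below shows that verify-after-download step under stated assumptions: the URL is the kubeadm URL from the log, but the `downloadWithSHA256` helper is illustrative and is not minikube's actual download.go implementation.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadWithSHA256 fetches url into dest and verifies the file against the
// hex digest published at url+".sha256", mirroring the
// "?checksum=file:<url>.sha256" scheme seen in the log above.
func downloadWithSHA256(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the payload while writing it to disk.
	hasher := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, hasher), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	got := hex.EncodeToString(hasher.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", dest, got, strings.TrimSpace(string(want)))
	}
	return nil
}

func main() {
	// Example: the kubeadm URL from the log above.
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"
	if err := downloadWithSHA256(url, "kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubeadm downloaded and verified")
}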
	I0923 10:52:39.672771   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:52:39.682637   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:52:39.699260   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:52:39.715572   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:52:39.732521   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:52:39.736488   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:52:39.748539   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:39.875794   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:52:39.893533   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:39.893887   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:39.893927   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:39.908489   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I0923 10:52:39.908913   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:39.909435   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:39.909466   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:39.909786   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:39.909988   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:39.910172   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:52:39.910308   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:52:39.910342   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:39.913308   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913748   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:39.913778   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913955   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:39.914131   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:39.914260   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:39.914383   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:40.061073   24995 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:40.061122   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I0923 10:53:01.101827   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (21.040673445s)
	I0923 10:53:01.101877   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:53:01.765759   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m02 minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:53:01.907605   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:53:02.022219   24995 start.go:319] duration metric: took 22.112042939s to joinCluster
	I0923 10:53:02.022286   24995 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:02.022624   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:02.023699   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:53:02.024977   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:02.301994   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:02.355631   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:53:02.355833   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:53:02.355886   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
	I0923 10:53:02.356182   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m02" to be "Ready" ...
	I0923 10:53:02.356275   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.356282   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.356289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.356293   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.365629   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
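[editor's note] The repeated GET requests that follow are node_ready.go polling /api/v1/nodes/ha-790780-m02 until its Ready condition turns True (with a 6m0s ceiling). Below is a minimal client-go sketch of such a wait loop, offered as an illustration only: the kubeconfig path and node name are taken from the log, but the loop structure and 500ms interval are assumptions, not minikube's exact implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19689-3961/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall budget as the log: wait up to six minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-790780-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to become Ready")
			return
		case <-time.After(500 * time.Millisecond):
			// poll again
		}
	}
}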
	I0923 10:53:02.856673   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.856694   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.856703   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.856706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.865889   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:53:03.356651   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.356671   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.356680   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.356687   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.363168   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:53:03.857045   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.857073   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.857084   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.857090   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.860890   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.356575   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.356597   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.356604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.356608   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.359661   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.360223   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:04.856507   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.856529   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.856537   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.856540   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.860119   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.356700   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.356722   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.356728   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.356733   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.360476   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.856749   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.856773   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.856781   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.856784   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.860556   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.356805   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.356825   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.356833   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.356837   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.359991   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.361007   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:06.857386   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.857410   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.857422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.857428   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.860894   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:07.357257   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.357281   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.357291   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.357296   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.361346   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:07.856430   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.856457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.856468   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.856475   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.860130   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.357367   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.357402   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.357416   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.357422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.360772   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.361285   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:08.856627   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.856648   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.856656   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.856661   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.860220   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.357037   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.357059   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.357070   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.357075   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.360298   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.857427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.857457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.857469   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.857474   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.860786   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.357151   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.357171   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.357180   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.357183   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.360916   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.362707   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:10.857145   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.857166   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.857174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.857178   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.861809   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:11.356801   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.356822   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.356830   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.356834   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.360464   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:11.856414   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.856436   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.856447   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.856450   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.859649   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.357058   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.357081   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.357088   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.357092   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.361042   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.857390   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.857414   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.857424   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.857428   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.861016   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.861719   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:13.357113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.357138   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.357150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.357155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.360431   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:13.857223   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.857243   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.857251   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.857255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.860307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.357308   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.357331   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.357339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.357342   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.361127   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.856952   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.856977   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.856987   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.856992   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.860782   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.356456   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.356485   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.356496   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.356502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.359792   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.360494   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:15.856872   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.856897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.856907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.856912   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.860634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.356764   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.356786   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.356793   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.356798   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.360240   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.856427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.856454   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.856466   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.856472   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.860397   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.356784   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.356806   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.356814   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.356819   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.360664   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.361536   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:17.856878   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.856902   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.856910   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.856915   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.860694   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.356716   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.356739   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.356746   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.356750   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.360583   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.856463   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.856487   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.856495   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.856502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.860301   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:19.356990   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.357018   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.357028   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.357031   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.361547   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:19.362649   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:19.857046   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.857065   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.857073   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.857077   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.860596   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.357289   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.357312   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.357321   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.357326   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.361074   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.857154   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.857178   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.857186   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.857190   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.860563   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.357410   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.357434   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.357445   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.357449   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.362160   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.362767   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:21.857033   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.857057   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.857065   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.857071   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.860457   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.860908   24995 node_ready.go:49] node "ha-790780-m02" has status "Ready":"True"
	I0923 10:53:21.860928   24995 node_ready.go:38] duration metric: took 19.504727616s for node "ha-790780-m02" to be "Ready" ...
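
The loop above is minikube's node_ready wait: it re-fetches the Node object roughly every 500ms until the Ready condition flips to True (about 19.5s here). A minimal sketch of the same poll in Go, assuming the API is reachable without TLS through a local `kubectl proxy` on 127.0.0.1:8001; the node name and timeout are simply the values seen in this log, not part of any minikube API:

// Sketch only: poll a node's Ready condition the way the log above does with
// raw GETs, but through a local `kubectl proxy` so no TLS/bearer-token
// plumbing is needed. Node name and timeout are taken from this log.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(base, name string) (bool, error) {
	resp, err := http.Get(base + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	const base = "http://127.0.0.1:8001" // kubectl proxy endpoint (assumption)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := nodeReady(base, "ha-790780-m02")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for node Ready")
}
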
	I0923 10:53:21.860937   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:53:21.861016   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:21.861026   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.861033   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.861037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.865124   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.870946   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.871015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:53:21.871023   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.871030   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.871035   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.873727   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.874362   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.874375   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.874383   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.874386   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.876630   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.877063   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.877077   24995 pod_ready.go:82] duration metric: took 6.11171ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877085   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877131   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:53:21.877139   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.877145   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.877148   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.879422   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.879947   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.879959   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.879966   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.879971   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.881756   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.882229   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.882243   24995 pod_ready.go:82] duration metric: took 5.151724ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882250   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882288   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:53:21.882295   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.882301   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.882305   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.884597   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.885566   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.885580   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.885587   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.885590   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.887691   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.888066   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.888081   24995 pod_ready.go:82] duration metric: took 5.825391ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888088   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888136   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:53:21.888144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.888150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.888154   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.890206   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.890675   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.890689   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.890699   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.890706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.892638   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.892989   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.893005   24995 pod_ready.go:82] duration metric: took 4.911284ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.893019   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.057496   24995 request.go:632] Waited for 164.405368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057558   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057562   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.057569   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.057573   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.061586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.257674   24995 request.go:632] Waited for 195.391664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257753   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257761   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.257768   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.257772   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.260869   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.261571   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.261592   24995 pod_ready.go:82] duration metric: took 368.566383ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.261602   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.457665   24995 request.go:632] Waited for 195.996413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457743   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457752   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.457762   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.457769   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.463274   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:53:22.657157   24995 request.go:632] Waited for 193.295869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657236   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657245   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.657255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.657261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.661000   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.661818   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.661846   24995 pod_ready.go:82] duration metric: took 400.236588ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.661858   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.857792   24995 request.go:632] Waited for 195.86636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857859   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857865   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.857872   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.857878   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.861662   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.057689   24995 request.go:632] Waited for 195.383255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057812   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057824   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.057834   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.057838   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.061339   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.062080   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.062106   24995 pod_ready.go:82] duration metric: took 400.238848ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.062119   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.257074   24995 request.go:632] Waited for 194.846773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257139   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.257154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.257159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.261117   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.457215   24995 request.go:632] Waited for 195.281467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457266   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457271   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.457280   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.457285   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.460410   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.460927   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.460946   24995 pod_ready.go:82] duration metric: took 398.811897ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.460959   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.657058   24995 request.go:632] Waited for 196.030311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657133   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657142   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.657151   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.657160   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.660449   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.857439   24995 request.go:632] Waited for 196.364612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857511   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857517   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.857524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.857528   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.861085   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.861628   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.861646   24995 pod_ready.go:82] duration metric: took 400.678998ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.861658   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.057696   24995 request.go:632] Waited for 195.97414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057780   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057788   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.057803   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.057811   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.061523   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.257819   24995 request.go:632] Waited for 195.359423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257886   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257891   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.257898   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.257903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.260794   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:24.261474   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.261495   24995 pod_ready.go:82] duration metric: took 399.829683ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.261504   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.457623   24995 request.go:632] Waited for 196.060511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457731   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.457743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.457754   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.461018   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.657050   24995 request.go:632] Waited for 195.289482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657104   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657112   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.657119   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.657123   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.660508   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.661074   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.661111   24995 pod_ready.go:82] duration metric: took 399.600186ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.661130   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.857061   24995 request.go:632] Waited for 195.872756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857130   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857135   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.857142   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.857146   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.860206   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.057515   24995 request.go:632] Waited for 196.490026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057567   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057572   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.057579   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.057584   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.060963   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.061666   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:25.061685   24995 pod_ready.go:82] duration metric: took 400.549015ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:25.061695   24995 pod_ready.go:39] duration metric: took 3.200747429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
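
The pod_ready waits above repeat the same pattern per system-critical pod: fetch the Pod, check its Ready condition, then confirm the node it runs on. A rough sketch of one such check, again assuming a local `kubectl proxy`; the `component=etcd` selector is only an illustrative example, not the exact query minikube issues:

// Sketch: list kube-system pods matching a label selector and report whether
// each has the Ready condition, mirroring the pod_ready waits in the log.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	base := "http://127.0.0.1:8001" // kubectl proxy (assumption)
	selector := url.QueryEscape("component=etcd")
	resp, err := http.Get(base + "/api/v1/namespaces/kube-system/pods?labelSelector=" + selector)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var pods podList
	if err := json.NewDecoder(resp.Body).Decode(&pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Metadata.Name, ready)
	}
}
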
	I0923 10:53:25.061708   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:53:25.061767   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:53:25.081513   24995 api_server.go:72] duration metric: took 23.059195196s to wait for apiserver process to appear ...
	I0923 10:53:25.081540   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:53:25.081558   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:53:25.085813   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:53:25.085884   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:53:25.085897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.085907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.085914   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.086702   24995 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 10:53:25.086786   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:53:25.086800   24995 api_server.go:131] duration metric: took 5.254846ms to wait for apiserver health ...
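
After the pods settle, the log shows two quick probes: GET /healthz must return the literal body "ok", and GET /version reports the control-plane version (v1.31.1 here). A hedged sketch of both, with the same `kubectl proxy` assumption; hitting https://192.168.39.234:8443 directly would additionally require the profile's CA and client certificates:

// Sketch: the healthz and version probes from the log, performed locally.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func get(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	base := "http://127.0.0.1:8001" // kubectl proxy (assumption)
	if body, err := get(base+"/healthz"); err == nil && body == "ok" {
		fmt.Println("apiserver healthy")
	}
	if body, err := get(base + "/version"); err == nil {
		fmt.Println(body) // JSON including gitVersion, e.g. v1.31.1 per this log
	}
}
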
	I0923 10:53:25.086810   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:53:25.257145   24995 request.go:632] Waited for 170.272303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257205   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257212   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.257236   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.257246   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.262177   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.267069   24995 system_pods.go:59] 17 kube-system pods found
	I0923 10:53:25.267104   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.267110   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.267114   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.267119   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.267122   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.267125   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.267129   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.267132   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.267135   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.267139   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.267147   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.267153   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.267156   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.267159   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.267162   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.267165   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.267168   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.267174   24995 system_pods.go:74] duration metric: took 180.359181ms to wait for pod list to return data ...
	I0923 10:53:25.267183   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:53:25.457458   24995 request.go:632] Waited for 190.183499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457513   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457518   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.457524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.457529   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.461448   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.461660   24995 default_sa.go:45] found service account: "default"
	I0923 10:53:25.461673   24995 default_sa.go:55] duration metric: took 194.484894ms for default service account to be created ...
	I0923 10:53:25.461682   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:53:25.657106   24995 request.go:632] Waited for 195.349388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657170   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657177   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.657185   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.657189   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.661432   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.665847   24995 system_pods.go:86] 17 kube-system pods found
	I0923 10:53:25.665873   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.665880   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.665884   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.665888   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.665891   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.665895   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.665898   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.665902   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.665905   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.665909   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.665912   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.665915   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.665918   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.665922   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.665925   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.665928   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.665930   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.665936   24995 system_pods.go:126] duration metric: took 204.248587ms to wait for k8s-apps to be running ...
	I0923 10:53:25.665944   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:53:25.665984   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:25.684789   24995 system_svc.go:56] duration metric: took 18.833844ms WaitForService to wait for kubelet
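
The kubelet check above is just an exit-code test: `systemctl is-active --quiet kubelet` returns 0 only when the unit is active. A local sketch follows; minikube runs the same command over SSH with sudo, which is omitted here:

// Sketch: probe the kubelet unit state via systemctl's exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
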
	I0923 10:53:25.684821   24995 kubeadm.go:582] duration metric: took 23.662507551s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:53:25.684838   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:53:25.857256   24995 request.go:632] Waited for 172.290601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857312   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857319   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.857330   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.857337   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.861630   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.862368   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862410   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862427   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862432   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862438   24995 node_conditions.go:105] duration metric: took 177.594557ms to run NodePressure ...
	I0923 10:53:25.862459   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:53:25.862493   24995 start.go:255] writing updated cluster config ...
	I0923 10:53:25.865563   24995 out.go:201] 
	I0923 10:53:25.867057   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:25.867172   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.868777   24995 out.go:177] * Starting "ha-790780-m03" control-plane node in "ha-790780" cluster
	I0923 10:53:25.870020   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:53:25.870049   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:53:25.870173   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:53:25.870184   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:53:25.870283   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.870479   24995 start.go:360] acquireMachinesLock for ha-790780-m03: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:53:25.870521   24995 start.go:364] duration metric: took 24.387µs to acquireMachinesLock for "ha-790780-m03"
	I0923 10:53:25.870535   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:25.870632   24995 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 10:53:25.871978   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:53:25.872058   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:25.872097   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:25.887083   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0923 10:53:25.887502   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:25.887952   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:25.887969   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:25.888292   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:25.888496   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:25.888647   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:25.888772   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:53:25.888800   24995 client.go:168] LocalClient.Create starting
	I0923 10:53:25.888829   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:53:25.888863   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888888   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888936   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:53:25.888954   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888964   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888978   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:53:25.888986   24995 main.go:141] libmachine: (ha-790780-m03) Calling .PreCreateCheck
	I0923 10:53:25.889134   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:25.889504   24995 main.go:141] libmachine: Creating machine...
	I0923 10:53:25.889516   24995 main.go:141] libmachine: (ha-790780-m03) Calling .Create
	I0923 10:53:25.889669   24995 main.go:141] libmachine: (ha-790780-m03) Creating KVM machine...
	I0923 10:53:25.890855   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing default KVM network
	I0923 10:53:25.890969   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing private KVM network mk-ha-790780
	I0923 10:53:25.891095   24995 main.go:141] libmachine: (ha-790780-m03) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:25.891119   24995 main.go:141] libmachine: (ha-790780-m03) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:53:25.891198   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:25.891096   25778 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:25.891276   24995 main.go:141] libmachine: (ha-790780-m03) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:53:26.119663   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.119526   25778 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa...
	I0923 10:53:26.169862   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169746   25778 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk...
	I0923 10:53:26.169897   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing magic tar header
	I0923 10:53:26.169907   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing SSH key tar header
	I0923 10:53:26.169915   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169856   25778 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:26.169932   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03
	I0923 10:53:26.169988   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 (perms=drwx------)
	I0923 10:53:26.170004   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:53:26.170016   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:53:26.170030   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:53:26.170039   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:53:26.170046   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:53:26.170054   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:53:26.170064   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:26.170078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:26.170094   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:53:26.170131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:53:26.170142   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:53:26.170148   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home
	I0923 10:53:26.170153   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Skipping /home - not owner
	I0923 10:53:26.171065   24995 main.go:141] libmachine: (ha-790780-m03) define libvirt domain using xml: 
	I0923 10:53:26.171093   24995 main.go:141] libmachine: (ha-790780-m03) <domain type='kvm'>
	I0923 10:53:26.171101   24995 main.go:141] libmachine: (ha-790780-m03)   <name>ha-790780-m03</name>
	I0923 10:53:26.171112   24995 main.go:141] libmachine: (ha-790780-m03)   <memory unit='MiB'>2200</memory>
	I0923 10:53:26.171120   24995 main.go:141] libmachine: (ha-790780-m03)   <vcpu>2</vcpu>
	I0923 10:53:26.171126   24995 main.go:141] libmachine: (ha-790780-m03)   <features>
	I0923 10:53:26.171134   24995 main.go:141] libmachine: (ha-790780-m03)     <acpi/>
	I0923 10:53:26.171144   24995 main.go:141] libmachine: (ha-790780-m03)     <apic/>
	I0923 10:53:26.171152   24995 main.go:141] libmachine: (ha-790780-m03)     <pae/>
	I0923 10:53:26.171161   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171166   24995 main.go:141] libmachine: (ha-790780-m03)   </features>
	I0923 10:53:26.171171   24995 main.go:141] libmachine: (ha-790780-m03)   <cpu mode='host-passthrough'>
	I0923 10:53:26.171175   24995 main.go:141] libmachine: (ha-790780-m03)   
	I0923 10:53:26.171184   24995 main.go:141] libmachine: (ha-790780-m03)   </cpu>
	I0923 10:53:26.171200   24995 main.go:141] libmachine: (ha-790780-m03)   <os>
	I0923 10:53:26.171209   24995 main.go:141] libmachine: (ha-790780-m03)     <type>hvm</type>
	I0923 10:53:26.171218   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='cdrom'/>
	I0923 10:53:26.171235   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='hd'/>
	I0923 10:53:26.171247   24995 main.go:141] libmachine: (ha-790780-m03)     <bootmenu enable='no'/>
	I0923 10:53:26.171256   24995 main.go:141] libmachine: (ha-790780-m03)   </os>
	I0923 10:53:26.171264   24995 main.go:141] libmachine: (ha-790780-m03)   <devices>
	I0923 10:53:26.171272   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='cdrom'>
	I0923 10:53:26.171284   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/boot2docker.iso'/>
	I0923 10:53:26.171294   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hdc' bus='scsi'/>
	I0923 10:53:26.171302   24995 main.go:141] libmachine: (ha-790780-m03)       <readonly/>
	I0923 10:53:26.171311   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171321   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='disk'>
	I0923 10:53:26.171336   24995 main.go:141] libmachine: (ha-790780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:53:26.171351   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk'/>
	I0923 10:53:26.171361   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hda' bus='virtio'/>
	I0923 10:53:26.171367   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171378   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171390   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='mk-ha-790780'/>
	I0923 10:53:26.171401   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171412   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171422   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171430   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='default'/>
	I0923 10:53:26.171439   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171447   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171455   24995 main.go:141] libmachine: (ha-790780-m03)     <serial type='pty'>
	I0923 10:53:26.171462   24995 main.go:141] libmachine: (ha-790780-m03)       <target port='0'/>
	I0923 10:53:26.171471   24995 main.go:141] libmachine: (ha-790780-m03)     </serial>
	I0923 10:53:26.171479   24995 main.go:141] libmachine: (ha-790780-m03)     <console type='pty'>
	I0923 10:53:26.171490   24995 main.go:141] libmachine: (ha-790780-m03)       <target type='serial' port='0'/>
	I0923 10:53:26.171499   24995 main.go:141] libmachine: (ha-790780-m03)     </console>
	I0923 10:53:26.171508   24995 main.go:141] libmachine: (ha-790780-m03)     <rng model='virtio'>
	I0923 10:53:26.171518   24995 main.go:141] libmachine: (ha-790780-m03)       <backend model='random'>/dev/random</backend>
	I0923 10:53:26.171530   24995 main.go:141] libmachine: (ha-790780-m03)     </rng>
	I0923 10:53:26.171537   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171544   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171555   24995 main.go:141] libmachine: (ha-790780-m03)   </devices>
	I0923 10:53:26.171565   24995 main.go:141] libmachine: (ha-790780-m03) </domain>
	I0923 10:53:26.171575   24995 main.go:141] libmachine: (ha-790780-m03) 
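
The block above is the libvirt domain XML for ha-790780-m03, logged one element per line: 2200 MiB of memory, 2 vCPUs, a host-passthrough CPU, the boot2docker ISO attached as a SCSI CD-ROM, the raw disk on virtio, and one virtio NIC each on mk-ha-790780 and the default network. As a rough illustration of how such a definition can be rendered from parameters, the minimal Go text/template sketch below reproduces it; the struct, template text and file name are assumptions, not the kvm2 driver's actual code, and the serial, console and RNG devices are omitted for brevity.

// domainxml_sketch.go - illustrative only: renders a domain definition comparable
// to the one logged above. The struct fields and template text are assumptions,
// not the kvm2 driver's actual types or template.
package main

import (
	"os"
	"text/template"
)

type domainParams struct {
	Name, ISO, Disk, Network string
	MemoryMiB, VCPU          int
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <features><acpi/><apic/><pae/></features>
  <cpu mode='host-passthrough'></cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	// Parameter values taken from the log lines above.
	_ = t.Execute(os.Stdout, domainParams{
		Name:      "ha-790780-m03",
		MemoryMiB: 2200,
		VCPU:      2,
		ISO:       "/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/boot2docker.iso",
		Disk:      "/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk",
		Network:   "mk-ha-790780",
	})
}
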
	I0923 10:53:26.178380   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:72:76:7a in network default
	I0923 10:53:26.178970   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring networks are active...
	I0923 10:53:26.178994   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:26.179728   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network default is active
	I0923 10:53:26.180047   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network mk-ha-790780 is active
	I0923 10:53:26.180480   24995 main.go:141] libmachine: (ha-790780-m03) Getting domain xml...
	I0923 10:53:26.181303   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:27.415592   24995 main.go:141] libmachine: (ha-790780-m03) Waiting to get IP...
	I0923 10:53:27.416244   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.416680   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.416705   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.416654   25778 retry.go:31] will retry after 301.241192ms: waiting for machine to come up
	I0923 10:53:27.719304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.719799   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.719822   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.719765   25778 retry.go:31] will retry after 352.048049ms: waiting for machine to come up
	I0923 10:53:28.073266   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.073729   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.073755   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.073678   25778 retry.go:31] will retry after 446.737236ms: waiting for machine to come up
	I0923 10:53:28.522311   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.522758   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.522785   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.522723   25778 retry.go:31] will retry after 430.883485ms: waiting for machine to come up
	I0923 10:53:28.955161   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.955610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.955632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.955571   25778 retry.go:31] will retry after 596.158416ms: waiting for machine to come up
	I0923 10:53:29.553342   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:29.553790   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:29.553817   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:29.553738   25778 retry.go:31] will retry after 730.070516ms: waiting for machine to come up
	I0923 10:53:30.285659   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:30.286131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:30.286157   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:30.286040   25778 retry.go:31] will retry after 880.584916ms: waiting for machine to come up
	I0923 10:53:31.168589   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:31.169030   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:31.169056   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:31.168976   25778 retry.go:31] will retry after 1.090798092s: waiting for machine to come up
	I0923 10:53:32.261334   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:32.261824   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:32.261851   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:32.261785   25778 retry.go:31] will retry after 1.772470281s: waiting for machine to come up
	I0923 10:53:34.036802   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:34.037280   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:34.037304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:34.037244   25778 retry.go:31] will retry after 2.114432637s: waiting for machine to come up
	I0923 10:53:36.153777   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:36.154260   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:36.154287   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:36.154219   25778 retry.go:31] will retry after 2.408325817s: waiting for machine to come up
	I0923 10:53:38.564571   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:38.565093   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:38.565130   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:38.565046   25778 retry.go:31] will retry after 2.326260729s: waiting for machine to come up
	I0923 10:53:40.892782   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:40.893136   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:40.893165   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:40.893117   25778 retry.go:31] will retry after 4.498444105s: waiting for machine to come up
	I0923 10:53:45.396707   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:45.397269   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:45.397291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:45.397229   25778 retry.go:31] will retry after 3.781853522s: waiting for machine to come up
	I0923 10:53:49.183061   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183495   24995 main.go:141] libmachine: (ha-790780-m03) Found IP for machine: 192.168.39.128
	I0923 10:53:49.183516   24995 main.go:141] libmachine: (ha-790780-m03) Reserving static IP address...
	I0923 10:53:49.183525   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has current primary IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183927   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find host DHCP lease matching {name: "ha-790780-m03", mac: "52:54:00:da:88:d2", ip: "192.168.39.128"} in network mk-ha-790780
	I0923 10:53:49.254082   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Getting to WaitForSSH function...
	I0923 10:53:49.254113   24995 main.go:141] libmachine: (ha-790780-m03) Reserved static IP address: 192.168.39.128
	I0923 10:53:49.254149   24995 main.go:141] libmachine: (ha-790780-m03) Waiting for SSH to be available...
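
The "will retry after ..." lines above show the driver polling the libvirt DHCP leases with growing, jittered delays until the domain's MAC address obtains an IP (about 22 seconds in this run). A minimal sketch of that style of retry loop follows; the helper name, backoff constants and cap are assumptions, not minikube's retry.go.

// retry_sketch.go - a minimal sketch of a growing, jittered retry loop similar
// in spirit to the "will retry after ..." lines above. The helper name and the
// backoff constants are assumptions, not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the timeout passes,
// sleeping a jittered, growing delay between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	base := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// Sleep somewhere between 0.5x and 1.5x of the base delay, then grow
		// the base delay, capped at a few seconds.
		sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if base < 4*time.Second {
			base = base * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
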
	I0923 10:53:49.256671   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257072   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.257129   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257268   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH client type: external
	I0923 10:53:49.257291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa (-rw-------)
	I0923 10:53:49.257308   24995 main.go:141] libmachine: (ha-790780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:53:49.257317   24995 main.go:141] libmachine: (ha-790780-m03) DBG | About to run SSH command:
	I0923 10:53:49.257331   24995 main.go:141] libmachine: (ha-790780-m03) DBG | exit 0
	I0923 10:53:49.381472   24995 main.go:141] libmachine: (ha-790780-m03) DBG | SSH cmd err, output: <nil>: 
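
The probe above runs "exit 0" on the new VM through the external ssh binary, with host-key checking disabled, key-only authentication and a 10-second connect timeout, exactly as listed in the option dump. Below is a hedged Go sketch of an equivalent probe using os/exec; the key path and address are copied from the log, and this is not libmachine's own code.

// ssh_probe_sketch.go - illustrative only; mirrors the external ssh invocation
// logged above (options, key path and user), not libmachine's implementation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.128",
		"exit 0",
	}
	// A nil error means SSH is reachable and the key is accepted.
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
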
	I0923 10:53:49.381777   24995 main.go:141] libmachine: (ha-790780-m03) KVM machine creation complete!
	I0923 10:53:49.382107   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:49.382695   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.382878   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.383011   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:53:49.383024   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetState
	I0923 10:53:49.384376   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:53:49.384391   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:53:49.384397   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:53:49.384405   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.386759   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387147   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.387171   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387306   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.387467   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387589   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387701   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.387847   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.388073   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.388086   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:53:49.488864   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:53:49.488884   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:53:49.488892   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.491596   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.491978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.492008   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.492099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.492277   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492427   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492526   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.492704   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.492876   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.492888   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:53:49.598720   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:53:49.598811   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:53:49.599353   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:53:49.599372   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599616   24995 buildroot.go:166] provisioning hostname "ha-790780-m03"
	I0923 10:53:49.599639   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599803   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.602122   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602493   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.602532   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602649   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.602826   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.602949   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.603164   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.603352   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.603516   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.603528   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m03 && echo "ha-790780-m03" | sudo tee /etc/hostname
	I0923 10:53:49.721012   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m03
	
	I0923 10:53:49.721052   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.723652   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.723993   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.724019   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.724168   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.724322   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724468   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724607   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.724760   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.724931   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.724946   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:53:49.840094   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:53:49.840118   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:53:49.840133   24995 buildroot.go:174] setting up certificates
	I0923 10:53:49.840143   24995 provision.go:84] configureAuth start
	I0923 10:53:49.840153   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.840425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:49.842798   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843203   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.843398   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.846675   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.846978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.847001   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.847165   24995 provision.go:143] copyHostCerts
	I0923 10:53:49.847199   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847229   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:53:49.847237   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847304   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:53:49.847373   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847390   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:53:49.847395   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847418   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:53:49.847462   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847478   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:53:49.847484   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847505   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:53:49.847551   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m03 san=[127.0.0.1 192.168.39.128 ha-790780-m03 localhost minikube]
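
The server certificate is generated with SANs for 127.0.0.1, the node IP 192.168.39.128, the hostname ha-790780-m03, localhost and minikube, so the Docker-style TLS endpoint stays valid no matter which of those names is used to reach the node. The sketch below issues a comparable certificate with Go's crypto/x509; to stay self-contained it self-signs a throwaway in-memory CA instead of loading .minikube/certs/ca-key.pem, and it is illustrative only, not minikube's provisioning code (error handling is elided).

// servercert_sketch.go - illustrative only; issues a server cert with the SANs
// listed in the log above, signed by a throwaway in-memory CA. Error handling
// is elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log for ha-790780-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-790780-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-790780-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.128")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
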
	I0923 10:53:50.272155   24995 provision.go:177] copyRemoteCerts
	I0923 10:53:50.272213   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:53:50.272235   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.275051   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275585   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.275610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275867   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.276099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.276265   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.276390   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.359884   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:53:50.359964   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:53:50.385147   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:53:50.385241   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:53:50.408651   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:53:50.408716   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:53:50.435874   24995 provision.go:87] duration metric: took 595.718111ms to configureAuth
	I0923 10:53:50.435900   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:53:50.436094   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:50.436172   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.438683   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439106   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.439127   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439321   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.439488   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439634   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439746   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.439894   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.440051   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.440064   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:53:50.684672   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:53:50.684697   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:53:50.684703   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetURL
	I0923 10:53:50.686020   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using libvirt version 6000000
	I0923 10:53:50.688488   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.688853   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.688879   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.689108   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:53:50.689121   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:53:50.689127   24995 client.go:171] duration metric: took 24.800318648s to LocalClient.Create
	I0923 10:53:50.689151   24995 start.go:167] duration metric: took 24.800381017s to libmachine.API.Create "ha-790780"
	I0923 10:53:50.689159   24995 start.go:293] postStartSetup for "ha-790780-m03" (driver="kvm2")
	I0923 10:53:50.689169   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:53:50.689184   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.689440   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:53:50.689461   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.691514   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.691815   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.691839   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.692003   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.692169   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.692285   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.692465   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.777980   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:53:50.782722   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:53:50.782745   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:53:50.782841   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:53:50.782921   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:53:50.782934   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:53:50.783049   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:53:50.794032   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:50.818235   24995 start.go:296] duration metric: took 129.060416ms for postStartSetup
	I0923 10:53:50.818300   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:50.818861   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.821701   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.822100   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822411   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:50.822611   24995 start.go:128] duration metric: took 24.951969783s to createHost
	I0923 10:53:50.822632   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.824818   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825087   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.825104   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825227   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.825431   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825587   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825708   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.825886   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.826038   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.826050   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:53:50.930070   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088830.907721483
	
	I0923 10:53:50.930099   24995 fix.go:216] guest clock: 1727088830.907721483
	I0923 10:53:50.930110   24995 fix.go:229] Guest: 2024-09-23 10:53:50.907721483 +0000 UTC Remote: 2024-09-23 10:53:50.822622208 +0000 UTC m=+146.966414831 (delta=85.099275ms)
	I0923 10:53:50.930129   24995 fix.go:200] guest clock delta is within tolerance: 85.099275ms
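
The clock check above runs "date +%s.%N" on the guest and compares the result with the host's idea of the current time; here the delta is about 85ms, which is inside the tolerance, so no time resync is forced. The small sketch below parses such an output and computes the delta; the timestamp is the one from the log, and the actual tolerance threshold is not asserted here.

// clockdelta_sketch.go - illustrative only; parses the `date +%s.%N` output
// shown above and compares it with the local wall clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1727088830.907721483" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}
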
	I0923 10:53:50.930136   24995 start.go:83] releasing machines lock for "ha-790780-m03", held for 25.059606586s
	I0923 10:53:50.930159   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.930413   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.933262   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.933632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.933662   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.936077   24995 out.go:177] * Found network options:
	I0923 10:53:50.937456   24995 out.go:177]   - NO_PROXY=192.168.39.234,192.168.39.43
	W0923 10:53:50.938766   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.938786   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.938798   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939303   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939487   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939579   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:53:50.939619   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	W0923 10:53:50.939635   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.939651   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.939713   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:53:50.939736   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.942522   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942765   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942929   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.942950   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943114   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943237   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.943278   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943281   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943465   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943491   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.943650   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943653   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.944011   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.944170   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:51.179564   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:53:51.186418   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:53:51.186493   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:53:51.205433   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:53:51.205455   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:53:51.205519   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:53:51.225654   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:53:51.240061   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:53:51.240122   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:53:51.255040   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:53:51.270087   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:53:51.386340   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:53:51.551856   24995 docker.go:233] disabling docker service ...
	I0923 10:53:51.551936   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:53:51.566431   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:53:51.579646   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:53:51.704084   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:53:51.818925   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:53:51.833174   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:53:51.851230   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:53:51.851304   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.862780   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:53:51.862838   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.874053   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.884749   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.895370   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:53:51.906992   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.919902   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.938806   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
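
The sed commands above pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and ensure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0, all inside /etc/crio/crio.conf.d/02-crio.conf. Below is a hedged Go sketch of the same "set key = value" idea applied to an in-memory config fragment; the starting values are stand-ins, and the real sequence in the log edits the file in place rather than replacing a string.

// crioconf_sketch.go - illustrative only: the "set key = value" effect of the
// sed commands above, applied to an in-memory copy of a config fragment.
package main

import (
	"fmt"
	"regexp"
)

// setKey rewrites any `key = ...` line in conf to `key = "value"`.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
}

func main() {
	// Stand-in for the relevant lines of 02-crio.conf before the edits.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	conf = setKey(conf, "conmon_cgroup", "pod")
	fmt.Print(conf)
}
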
	I0923 10:53:51.950285   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:53:51.960703   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:53:51.960774   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:53:51.975701   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
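
The sysctl probe a few lines above exits with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded; the log notes this "might be okay", loads the module, and then enables IPv4 forwarding. A short sketch of the same check-and-fix sequence is below; it is illustrative only and both fixes need root.

// netfilter_sketch.go - illustrative only; mirrors the sysctl probe, modprobe
// and ip_forward steps from the log. Both fixes need root.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The bridge sysctls only exist once the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
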
	I0923 10:53:51.986268   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:52.107292   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:53:52.198777   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:53:52.198848   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:53:52.204135   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:53:52.204184   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:53:52.208403   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:53:52.251505   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:53:52.251599   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.282350   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.311799   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:53:52.313353   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:53:52.314907   24995 out.go:177]   - env NO_PROXY=192.168.39.234,192.168.39.43
	I0923 10:53:52.316435   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:52.319158   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319626   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:52.319654   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319874   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:53:52.324605   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:52.339255   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:53:52.339529   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:52.339777   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.339813   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.354195   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0923 10:53:52.354688   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.355182   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.355203   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.355538   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.355708   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:53:52.357205   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:52.357505   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.357542   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.372762   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0923 10:53:52.373235   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.373697   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.373716   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.374015   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.374212   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:52.374340   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.128
	I0923 10:53:52.374351   24995 certs.go:194] generating shared ca certs ...
	I0923 10:53:52.374369   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.374504   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:53:52.374556   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:53:52.374570   24995 certs.go:256] generating profile certs ...
	I0923 10:53:52.374655   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:53:52.374693   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6
	I0923 10:53:52.374713   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.128 192.168.39.254]
	I0923 10:53:52.830596   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 ...
	I0923 10:53:52.830630   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6: {Name:mk3da13c3de64b9df293631e361b2c7f1e18faef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830809   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 ...
	I0923 10:53:52.830824   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6: {Name:mk9b5e211aee3a00b4a3121b2b594883d08d2d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830919   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:53:52.831074   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:53:52.831254   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:53:52.831273   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:53:52.831292   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:53:52.831307   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:53:52.831326   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:53:52.831343   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:53:52.831361   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:53:52.831377   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:53:52.845466   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:53:52.845553   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:53:52.845615   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:53:52.845628   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:53:52.845681   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:53:52.845720   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:53:52.845752   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:53:52.845808   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:52.845849   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:53:52.845870   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:52.845888   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:53:52.845975   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:52.849292   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849803   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:52.849833   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849989   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:52.850212   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:52.850363   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:52.850493   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:52.925695   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:53:52.931543   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:53:52.942513   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:53:52.947104   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:53:52.958388   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:53:52.963161   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:53:52.974344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:53:52.978586   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:53:52.989199   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:53:52.993359   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:53:53.004532   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:53:53.009112   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:53:53.022998   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:53:53.048580   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:53:53.074022   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:53:53.099377   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:53:53.125775   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 10:53:53.149277   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:53:53.173416   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:53:53.196002   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:53:53.219585   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:53:53.244005   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:53:53.269483   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:53:53.294869   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:53:53.313037   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:53:53.331540   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:53:53.349167   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:53:53.365721   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:53:53.382590   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:53:53.399048   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:53:53.415691   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:53:53.421883   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:53:53.432913   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437536   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437594   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.443568   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:53:53.454559   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:53:53.466110   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.471977   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.472046   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.478758   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:53:53.490184   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:53:53.500924   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505855   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505903   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.511671   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
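The openssl/ln steps above install each CA into the node's trust store under its OpenSSL subject-hash name (for example /etc/ssl/certs/3ec20f2e.0 for 111392.pem). A minimal sketch of that hash-and-link step, assuming openssl is on PATH; the certificate path is taken from the log above, and the program is illustrative rather than minikube's own code:

// Illustrative sketch (not minikube code): compute the OpenSSL subject hash for a
// CA certificate and create the hash-named symlink the trust store expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/111392.pem" // path copied from the log; adjust as needed

	// Same operation as the logged "openssl x509 -hash -noout -in <pem>" run.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"

	// Same effect as the logged "ln -fs <pem> /etc/ssl/certs/<hash>.0".
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate -f: drop a stale link if one exists
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}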
	I0923 10:53:53.523484   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:53:53.527585   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:53:53.527642   24995 kubeadm.go:934] updating node {m03 192.168.39.128 8443 v1.31.1 crio true true} ...
	I0923 10:53:53.527721   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:53:53.527745   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:53:53.527775   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:53:53.547465   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:53:53.547540   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
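The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 on port 8443 and load-balances across the control-plane nodes. A quick, hedged way to confirm the VIP is answering TLS once kube-vip has claimed it; this is a standalone sketch, not part of the test, with the address taken from the manifest and the timeout chosen arbitrarily:

// Illustrative sketch: probe the HA VIP with a TLS handshake and report the
// serving certificate's CN. Verification is skipped because this is a liveness
// probe only, not an authenticated API call.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	dialer := &net.Dialer{Timeout: 5 * time.Second}
	conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443",
		&tls.Config{InsecureSkipVerify: true}) // probe only; do not trust this connection
	if err != nil {
		fmt.Println("VIP not answering yet:", err)
		return
	}
	defer conn.Close()
	state := conn.ConnectionState()
	fmt.Println("VIP is up, serving cert CN:", state.PeerCertificates[0].Subject.CommonName)
}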
	I0923 10:53:53.547608   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.560380   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:53:53.560453   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.573111   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:53:53.573138   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:53:53.573159   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573166   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:53.573188   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:53:53.573217   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.573226   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573267   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.590633   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:53:53.590666   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.590676   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:53:53.590699   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:53:53.590727   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:53:53.590760   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.604722   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:53:53.604761   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
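The "Not caching binary, using ...?checksum=file:...sha256" lines above describe downloads that are pinned to the published SHA-256 of each binary before being pushed to the node. A self-contained sketch of that checksum-pinning pattern, assuming the dl.k8s.io URLs shown in the log are reachable; this is illustrative code, not the downloader minikube actually uses:

// Illustrative sketch: fetch a release binary plus its published .sha256 file
// and refuse to proceed unless the digests match.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// URL copied from the log; kubeadm and kubelet follow the same pattern.
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	want := strings.Fields(strings.TrimSpace(string(sum)))[0] // file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch; refusing to install")
		os.Exit(1)
	}
	fmt.Println("checksum OK,", len(bin), "bytes")
}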
	I0923 10:53:54.451748   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:53:54.462513   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:53:54.481654   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:53:54.498291   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:53:54.514964   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:53:54.519190   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:54.531635   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:54.654563   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:54.675941   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:54.676279   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:54.676323   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:54.693004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0923 10:53:54.693496   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:54.693939   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:54.693961   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:54.694293   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:54.694479   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:54.694626   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:53:54.694743   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:53:54.694765   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:54.697460   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.697884   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:54.697912   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.698049   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:54.698201   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:54.698349   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:54.698455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:54.854997   24995 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:54.855050   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
	I0923 10:54:17.634590   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443": (22.77951683s)
	I0923 10:54:17.634630   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:54:18.244633   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m03 minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:54:18.356200   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:54:18.464003   24995 start.go:319] duration metric: took 23.769370572s to joinCluster
	I0923 10:54:18.464065   24995 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:54:18.464405   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:54:18.465913   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:54:18.467412   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:54:18.756406   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:54:18.802392   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:54:18.802611   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:54:18.802663   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
	I0923 10:54:18.802852   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m03" to be "Ready" ...
	I0923 10:54:18.802919   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:18.802926   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:18.802933   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:18.802938   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:18.806473   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:19.303251   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.303278   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.303289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.303297   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.306929   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:19.803053   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.803079   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.803087   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.803099   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.806552   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.303861   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.303887   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.303897   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.303903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.307405   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.803113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.803146   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.803154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.803159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.806146   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:20.806645   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:21.303931   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.303977   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.303989   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.303995   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.308047   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:21.803958   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.803978   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.803985   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.803991   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.807634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.303112   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.303136   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.303146   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.303152   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.803868   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.803900   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.803912   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.803918   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.809179   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:22.809796   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:23.303023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.303042   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.303050   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.303054   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.306668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:23.803788   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.803812   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.803824   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.803830   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.807293   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.303271   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.303300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.303312   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.303319   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.306672   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.804050   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.804069   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.804078   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.804081   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.807683   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:25.303840   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.303859   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.303867   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.303871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.306860   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:25.307495   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:25.803972   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.804004   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.804015   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.804020   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.809010   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:26.303324   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.303361   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.303373   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.303381   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.307038   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:26.803707   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.803726   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.803735   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.803740   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.807424   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.303612   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.303633   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.303641   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.303644   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.307894   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:27.803014   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.803035   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.803042   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.803047   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.806595   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.303068   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.303091   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.303099   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.303103   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.306712   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.803340   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.803367   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.803378   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.803383   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.808838   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:29.303295   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.303316   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.303329   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.303334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.306632   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.803768   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.803791   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.803799   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.803805   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.807177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.807790   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:30.303713   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.303735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.303747   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.303752   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.307209   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:30.803111   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.803133   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.803141   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.803149   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.806613   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.303325   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.303352   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.303371   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.303378   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.307177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.803015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.803038   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.803048   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.803056   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.806715   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.304018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.304043   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.304053   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.304060   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.307932   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.308669   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:32.803891   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.803917   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.803926   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.803930   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.807307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.303944   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.303964   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.303971   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.303975   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.307665   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.803624   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.803651   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.803662   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.803667   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.807257   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.303218   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.303254   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.303260   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.306866   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.803306   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.803327   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.803334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.803339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.807098   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.807707   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:35.303220   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.303255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.303261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.306357   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:35.803279   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.803300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.803308   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.803311   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.806322   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:36.303406   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.303426   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.303434   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.303437   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.307051   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.804001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.804025   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.804032   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.804037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.807873   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.808340   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:37.304023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.304056   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.304068   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.304074   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.307139   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.803018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.803040   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.803049   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.803053   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.806605   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.807211   24995 node_ready.go:49] node "ha-790780-m03" has status "Ready":"True"
	I0923 10:54:37.807228   24995 node_ready.go:38] duration metric: took 19.004361031s for node "ha-790780-m03" to be "Ready" ...
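The half-second GET loop above simply re-reads the Node object until its Ready condition reports True, within the 6m0s budget. A minimal client-go sketch of the same check; this is not minikube's implementation, the kubeconfig path and node name are copied from the log, and the polling interval is illustrative:

// Illustrative sketch: poll a node's Ready condition until it is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19689-3961/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-790780-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice per second
	}
	fmt.Println("timed out waiting for node Ready")
}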
	I0923 10:54:37.807235   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:54:37.807290   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:37.807299   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.807306   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.807314   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.813087   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:37.819930   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.820001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:54:37.820010   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.820017   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.820021   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.822941   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.823534   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.823553   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.823564   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.823569   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.826001   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.826517   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.826537   24995 pod_ready.go:82] duration metric: took 6.583104ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826548   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826607   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:54:37.826617   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.826627   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.826638   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.829279   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.829843   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.829861   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.829871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.829876   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.832424   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.832919   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.832933   24995 pod_ready.go:82] duration metric: took 6.374276ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832941   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832999   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:54:37.833006   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.833012   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.833019   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.835776   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.836388   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.836406   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.836415   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.836421   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.838742   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.839384   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.839400   24995 pod_ready.go:82] duration metric: took 6.450727ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839411   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839464   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:54:37.839474   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.839484   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.839492   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.841917   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.842434   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:37.842448   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.842457   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.842463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.844487   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.844973   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.844988   24995 pod_ready.go:82] duration metric: took 5.569102ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.844998   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.003469   24995 request.go:632] Waited for 158.377606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003538   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003546   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.003556   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.003563   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.007272   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.203213   24995 request.go:632] Waited for 195.30349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203263   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203268   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.203276   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.203283   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.206660   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.207358   24995 pod_ready.go:93] pod "etcd-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.207377   24995 pod_ready.go:82] duration metric: took 362.371698ms for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
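The "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter; the rest.Config dump earlier shows QPS:0 and Burst:0, which means the library defaults (roughly 5 requests/second with a burst of 10) apply, so bursts of GETs get queued briefly. If a caller wanted to avoid that queueing it could raise the limits on the rest.Config before building the clientset; a hedged sketch with arbitrary values, using the kubeconfig path from the log:

// Illustrative sketch: raise client-go's client-side rate limits so short bursts
// of requests are not throttled. Values are arbitrary, not a recommendation.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19689-3961/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // library default is ~5 requests/second
	cfg.Burst = 100 // library default burst is ~10

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready with QPS=%v Burst=%v (%T)\n", cfg.QPS, cfg.Burst, cs)
}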
	I0923 10:54:38.207393   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.403519   24995 request.go:632] Waited for 196.060085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403591   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403596   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.403604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.403609   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.407248   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.603071   24995 request.go:632] Waited for 195.28673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603162   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603171   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.603185   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.603191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.606368   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.606871   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.606889   24995 pod_ready.go:82] duration metric: took 399.489169ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.606901   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.803863   24995 request.go:632] Waited for 196.897276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803951   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803957   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.803965   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.803970   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.807324   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.003391   24995 request.go:632] Waited for 195.083674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003447   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003452   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.003459   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.003463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.007170   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.007621   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.007637   24995 pod_ready.go:82] duration metric: took 400.728218ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.007646   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.203104   24995 request.go:632] Waited for 195.376867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203174   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203180   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.203191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.203199   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.207195   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.403428   24995 request.go:632] Waited for 195.367448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403481   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403497   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.403514   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.403518   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.407467   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.408031   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.408055   24995 pod_ready.go:82] duration metric: took 400.401034ms for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.408068   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.604073   24995 request.go:632] Waited for 195.932476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604147   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604155   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.604162   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.604171   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.607668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.803638   24995 request.go:632] Waited for 195.213228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803724   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.803743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.803746   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.807615   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.808349   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.808366   24995 pod_ready.go:82] duration metric: took 400.287089ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.808375   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.003824   24995 request.go:632] Waited for 195.387565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003877   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003882   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.003889   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.003899   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.007398   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.203651   24995 request.go:632] Waited for 195.36679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203725   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.203732   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.203735   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.207328   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.208124   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.208142   24995 pod_ready.go:82] duration metric: took 399.761139ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.208155   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.403086   24995 request.go:632] Waited for 194.869554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403150   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403167   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.403177   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.403187   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.407112   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.603302   24995 request.go:632] Waited for 195.339611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603351   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603356   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.603364   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.603368   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.606880   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.607541   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.607563   24995 pod_ready.go:82] duration metric: took 399.39886ms for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.607574   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.803473   24995 request.go:632] Waited for 195.828576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803528   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803533   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.803540   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.803544   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.807602   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:41.003253   24995 request.go:632] Waited for 194.249655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003339   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003350   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.003359   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.003365   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.006586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.007310   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.007329   24995 pod_ready.go:82] duration metric: took 399.74892ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.007339   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.203496   24995 request.go:632] Waited for 196.092833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203562   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203567   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.203575   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.203578   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.207204   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.403851   24995 request.go:632] Waited for 195.767978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403907   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403914   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.403924   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.403934   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.407303   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.407822   24995 pod_ready.go:93] pod "kube-proxy-rqjzc" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.407837   24995 pod_ready.go:82] duration metric: took 400.492538ms for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.407846   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.604077   24995 request.go:632] Waited for 196.149981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604148   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.604169   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.604174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.607470   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.803470   24995 request.go:632] Waited for 195.363139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803568   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803577   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.803599   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.803607   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.806928   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.807802   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.807821   24995 pod_ready.go:82] duration metric: took 399.96783ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.807833   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.004033   24995 request.go:632] Waited for 196.111135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004102   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004132   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.004143   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.004163   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.007471   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.203462   24995 request.go:632] Waited for 195.3653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203523   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203530   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.203539   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.203542   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.207322   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.207956   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.207977   24995 pod_ready.go:82] duration metric: took 400.13764ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.207986   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.403868   24995 request.go:632] Waited for 195.812102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403956   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403968   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.403980   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.403990   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.407964   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.603132   24995 request.go:632] Waited for 194.291839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603204   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603209   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.603219   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.603225   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.607412   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:42.607957   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.607976   24995 pod_ready.go:82] duration metric: took 399.981007ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.607988   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.804082   24995 request.go:632] Waited for 196.014482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804143   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.804150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.804155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.807740   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.003755   24995 request.go:632] Waited for 195.347939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003855   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003875   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.003887   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.003896   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.007973   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.009036   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:43.009058   24995 pod_ready.go:82] duration metric: took 401.061758ms for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:43.009074   24995 pod_ready.go:39] duration metric: took 5.201827787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:54:43.009091   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:54:43.009170   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:54:43.027664   24995 api_server.go:72] duration metric: took 24.563557521s to wait for apiserver process to appear ...
	I0923 10:54:43.027697   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:54:43.027721   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:54:43.032140   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:54:43.032214   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:54:43.032220   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.032231   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.032238   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.033668   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:54:43.033783   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:54:43.033805   24995 api_server.go:131] duration metric: took 6.10028ms to wait for apiserver health ...
	I0923 10:54:43.033815   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:54:43.204056   24995 request.go:632] Waited for 170.168573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204125   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204130   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.204140   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.204147   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.210512   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.216975   24995 system_pods.go:59] 24 kube-system pods found
	I0923 10:54:43.217008   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.217015   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.217020   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.217025   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.217030   24995 system_pods.go:61] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.217035   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.217039   24995 system_pods.go:61] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.217046   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.217052   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.217060   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.217065   24995 system_pods.go:61] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.217073   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.217078   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.217086   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.217094   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.217099   24995 system_pods.go:61] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.217104   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.217109   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.217113   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.217118   24995 system_pods.go:61] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.217124   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.217129   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.217137   24995 system_pods.go:61] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.217141   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.217150   24995 system_pods.go:74] duration metric: took 183.325652ms to wait for pod list to return data ...
	I0923 10:54:43.217162   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:54:43.403603   24995 request.go:632] Waited for 186.357604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403650   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403671   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.403685   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.403692   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.408142   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.408270   24995 default_sa.go:45] found service account: "default"
	I0923 10:54:43.408289   24995 default_sa.go:55] duration metric: took 191.114244ms for default service account to be created ...
	I0923 10:54:43.408302   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:54:43.603624   24995 request.go:632] Waited for 195.240427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603680   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603685   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.603692   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.603698   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.609933   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.617043   24995 system_pods.go:86] 24 kube-system pods found
	I0923 10:54:43.617075   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.617081   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.617085   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.617089   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.617094   24995 system_pods.go:89] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.617098   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.617101   24995 system_pods.go:89] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.617105   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.617108   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.617111   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.617115   24995 system_pods.go:89] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.617118   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.617123   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.617126   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.617129   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.617132   24995 system_pods.go:89] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.617136   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.617139   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.617142   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.617145   24995 system_pods.go:89] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.617148   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.617151   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.617154   24995 system_pods.go:89] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.617157   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.617163   24995 system_pods.go:126] duration metric: took 208.855184ms to wait for k8s-apps to be running ...
	I0923 10:54:43.617173   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:54:43.617217   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:54:43.635389   24995 system_svc.go:56] duration metric: took 18.194216ms WaitForService to wait for kubelet
	I0923 10:54:43.635423   24995 kubeadm.go:582] duration metric: took 25.171320686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:54:43.635447   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:54:43.803841   24995 request.go:632] Waited for 168.315518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803908   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803913   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.803920   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.803924   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.807502   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.808531   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808553   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808564   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808567   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808571   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808574   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808579   24995 node_conditions.go:105] duration metric: took 173.125439ms to run NodePressure ...
	I0923 10:54:43.808592   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:54:43.808611   24995 start.go:255] writing updated cluster config ...
	I0923 10:54:43.808882   24995 ssh_runner.go:195] Run: rm -f paused
	I0923 10:54:43.860687   24995 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:54:43.862725   24995 out.go:177] * Done! kubectl is now configured to use "ha-790780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.403867528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089106403842561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f861ee6b-a670-444c-9633-f35e7ae956a0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.404276904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d07fffa-11d2-4ba4-884d-755096b60dff name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.404326310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d07fffa-11d2-4ba4-884d-755096b60dff name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.405226575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d07fffa-11d2-4ba4-884d-755096b60dff name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.446966325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4578c0ca-f878-4531-b9e6-60b3684015e6 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.447040180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4578c0ca-f878-4531-b9e6-60b3684015e6 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.448250077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa29dfda-72a1-43a1-a4d0-8978bf2bfd87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.448747647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089106448722186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa29dfda-72a1-43a1-a4d0-8978bf2bfd87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.449418991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f86061bf-613e-4ba4-a78c-b034fea22f87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.449475699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f86061bf-613e-4ba4-a78c-b034fea22f87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.449693485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f86061bf-613e-4ba4-a78c-b034fea22f87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.494911143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f64a9113-39e8-418a-a601-08e2528ddd2a name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.495002147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f64a9113-39e8-418a-a601-08e2528ddd2a name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.496592242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6839c7f-6abe-4aad-a928-18812e5d1e03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.497073470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089106497046194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6839c7f-6abe-4aad-a928-18812e5d1e03 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.497657292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30aa6084-ae22-43f2-8dee-444635c3e90d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.497743911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30aa6084-ae22-43f2-8dee-444635c3e90d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.498030096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30aa6084-ae22-43f2-8dee-444635c3e90d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.538983017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14273d46-16ea-4c42-b3a8-787fc09752ff name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.539080170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14273d46-16ea-4c42-b3a8-787fc09752ff name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.540056956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa322765-9e21-47fb-8a27-dd23b1435283 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.540568107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089106540544447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa322765-9e21-47fb-8a27-dd23b1435283 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.541055953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b01afd74-dacd-414d-ba26-aa64aa3d19da name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.541123136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b01afd74-dacd-414d-ba26-aa64aa3d19da name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:26 ha-790780 crio[667]: time="2024-09-23 10:58:26.541332919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b01afd74-dacd-414d-ba26-aa64aa3d19da name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b6cdb320cb12       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   64b2fb317bf54       busybox-7dff88458-hmsb2
	fceea5af30884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7f70accb19994       coredns-7c65d6cfc9-vzhrs
	504391361e9f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e1bfaf7843489       storage-provisioner
	8f008021913ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   61e4d18ef53ff       coredns-7c65d6cfc9-bsbth
	20dea9bfd7b93       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   12e4b7f578705       kube-proxy-jqwtw
	70e8cba43f15f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   a1aa2ae427e36       kindnet-5d9ww
	58d7d0f860c2c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2b178d8dcf3ad       kube-vip-ha-790780
	579e069dd212e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   d632e3d4755d2       kube-scheduler-ha-790780
	4881d47948f52       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d65f8d57327b0       kube-controller-manager-ha-790780
	f13343b3ed39e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9e910662aa470       kube-apiserver-ha-790780
	621532bf94f06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   cf20e920bbbdf       etcd-ha-790780
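	
	The table above is the CRI's view of the node's containers, the same data carried by the repeated ListContainers responses in the crio debug log earlier in this section. As an illustration only (not part of the test suite), a minimal Go sketch of issuing that call directly against the CRI-O socket follows; it assumes the google.golang.org/grpc and k8s.io/cri-api modules and uses the socket path from the node's cri-socket annotation (unix:///var/run/crio/crio.sock).
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	        // Dial the CRI-O socket named in the node's cri-socket annotation.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	
	        // An empty filter returns every container, which is why the crio debug
	        // log above notes "No filters were applied, returning full container list".
	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	        }
	    }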
	
	
	==> coredns [8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927] <==
	[INFO] 10.244.1.2:59395 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000129294s
	[INFO] 10.244.1.2:33748 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00097443s
	[INFO] 10.244.0.4:46523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219823s
	[INFO] 10.244.2.2:35535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239865s
	[INFO] 10.244.2.2:36372 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017141396s
	[INFO] 10.244.2.2:50254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209403s
	[INFO] 10.244.1.2:48243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198306s
	[INFO] 10.244.1.2:39091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230366s
	[INFO] 10.244.1.2:49543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199975s
	[INFO] 10.244.0.4:45173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102778s
	[INFO] 10.244.0.4:32836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736533s
	[INFO] 10.244.0.4:44659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129519s
	[INFO] 10.244.0.4:54433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098668s
	[INFO] 10.244.0.4:37772 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.2.2:43894 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134793s
	[INFO] 10.244.2.2:34604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147389s
	[INFO] 10.244.1.2:53532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242838s
	[INFO] 10.244.1.2:45804 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159901s
	[INFO] 10.244.1.2:39298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112738s
	[INFO] 10.244.0.4:43692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093071s
	[INFO] 10.244.0.4:51414 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096722s
	[INFO] 10.244.2.2:56355 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295938s
	[INFO] 10.244.1.2:59520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142399s
	[INFO] 10.244.0.4:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090911s
	[INFO] 10.244.0.4:53926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114353s
	
	
	==> coredns [fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc] <==
	[INFO] 10.244.2.2:49856 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000346472s
	[INFO] 10.244.2.2:58522 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173747s
	[INFO] 10.244.2.2:60029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181162s
	[INFO] 10.244.2.2:38618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184142s
	[INFO] 10.244.1.2:46063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758433s
	[INFO] 10.244.1.2:60295 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001402726s
	[INFO] 10.244.1.2:38240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160236s
	[INFO] 10.244.1.2:41977 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113581s
	[INFO] 10.244.1.2:44892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133741s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105848s
	[INFO] 10.244.0.4:58776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144697s
	[INFO] 10.244.0.4:33311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001202009s
	[INFO] 10.244.2.2:57039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019058s
	[INFO] 10.244.2.2:57127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153386s
	[INFO] 10.244.1.2:52843 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168874s
	[INFO] 10.244.0.4:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014121s
	[INFO] 10.244.0.4:38864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079009s
	[INFO] 10.244.2.2:47502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158927s
	[INFO] 10.244.2.2:57106 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185408s
	[INFO] 10.244.2.2:34447 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139026s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015634s
	[INFO] 10.244.1.2:53446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288738s
	[INFO] 10.244.1.2:52114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166821s
	[INFO] 10.244.0.4:54732 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099319s
	[INFO] 10.244.0.4:49290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071388s
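	
	The CoreDNS entries above are query-log lines: each records the client address, the query type and name, the transport, EDNS details, the response code and flags, the answer size, and the latency. The recurring PTR queries for 1.0.96.10.in-addr.arpa and 10.0.96.10.in-addr.arpa are reverse lookups of 10.96.0.1 and 10.96.0.10, which in a default minikube service CIDR are normally the kubernetes API Service and the cluster DNS Service. A small, hypothetical Go sketch of reproducing those reverse lookups from inside the cluster (illustration only, assuming the in-cluster resolver is in use):
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	    )
	
	    func main() {
	        // Reverse-resolve the two addresses behind the PTR queries in the log
	        // above. Run from a pod, the answers should be the *.svc.cluster.local
	        // names CoreDNS reported.
	        for _, ip := range []string{"10.96.0.1", "10.96.0.10"} {
	            names, err := net.LookupAddr(ip)
	            fmt.Printf("%s -> %v (err: %v)\n", ip, names, err)
	        }
	    }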
	
	
	==> describe nodes <==
	Name:               ha-790780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-790780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4137f4910e0940f183cebcb2073b69b7
	  System UUID:                4137f491-0e09-40f1-83ce-bcb2073b69b7
	  Boot ID:                    d20b206f-6d12-4950-af76-836822976902
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmsb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7c65d6cfc9-bsbth             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7c65d6cfc9-vzhrs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-790780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-5d9ww                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-790780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-790780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-jqwtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-790780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-790780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m17s  kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s  kubelet          Node ha-790780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s  kubelet          Node ha-790780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s  kubelet          Node ha-790780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m20s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  NodeReady                6m6s   kubelet          Node ha-790780 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	
	
	Name:               ha-790780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:56:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-790780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87f6f3c7af44480934336376709a0c8
	  System UUID:                f87f6f3c-7af4-4480-9343-36376709a0c8
	  Boot ID:                    869cdc79-44fe-45ec-baeb-66b85d8eb577
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdk9n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-790780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-x2v9d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-790780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-790780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-x8fb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-790780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-790780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-790780-m02 status is now: NodeNotReady
	
	
	Name:               ha-790780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-790780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2525d1b15b4365a533b4fbbc7d76d5
	  System UUID:                8a2525d1-b15b-4365-a533-b4fbbc7d76d5
	  Boot ID:                    a7b3ffe3-56b6-4c77-b8bb-b94fecea7ce9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2f4vm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-790780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-lzbx6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-790780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-790780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-rqjzc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-790780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-790780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-790780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	
	
	Name:               ha-790780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_55_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:55:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-790780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8bb8bb71d764d5397c864a970ca06f0
	  System UUID:                a8bb8bb7-1d76-4d53-97c8-64a970ca06f0
	  Boot ID:                    43fa98cd-88cb-492d-a6f8-c4d1f11bcb1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sz6cc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-58k4g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                   node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-790780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 10:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050514] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807632] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451360] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.609594] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.519719] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055679] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057192] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.186843] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114356] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269409] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.949380] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.106869] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.060266] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 10:52] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.081963] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.787202] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.501695] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 10:53] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989] <==
	{"level":"warn","ts":"2024-09-23T10:58:26.605286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.705182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.816853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.829999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.840077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.854223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.858863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.862664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.872014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.878288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.885002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.888519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.893173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.896424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.902650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.904531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.909183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.916176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.921118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.925449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.930774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.940530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.948037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:26.996102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:27.005059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:58:27 up 6 min,  0 users,  load average: 0.29, 0.35, 0.18
	Linux ha-790780 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9] <==
	I0923 10:57:49.675551       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:57:59.683520       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:57:59.683566       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:57:59.683731       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:57:59.683766       1 main.go:299] handling current node
	I0923 10:57:59.683782       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:57:59.683789       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:57:59.683861       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:57:59.683870       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674500       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:09.674559       1 main.go:299] handling current node
	I0923 10:58:09.674578       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:09.674587       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:09.674781       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:09.674808       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674853       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:09.674859       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:58:19.676409       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:19.676470       1 main.go:299] handling current node
	I0923 10:58:19.676501       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:19.676506       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:19.676695       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:19.676726       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:19.676792       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:19.676813       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d] <==
	I0923 10:52:02.470272       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 10:52:02.487288       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 10:52:02.636999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 10:52:06.966628       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 10:52:07.024027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 10:54:15.771868       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.772121       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.642µs, panicked: false, err: <nil>, panic-reason: <nil>" logger="UnhandledError"
	E0923 10:54:15.773436       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.774650       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.775958       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.219249ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 10:54:50.840870       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42568: use of closed network connection
	E0923 10:54:51.046928       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42582: use of closed network connection
	E0923 10:54:51.239325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42598: use of closed network connection
	E0923 10:54:51.469344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42622: use of closed network connection
	E0923 10:54:51.662336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42652: use of closed network connection
	E0923 10:54:51.840022       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42678: use of closed network connection
	E0923 10:54:52.023650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0923 10:54:52.216046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42724: use of closed network connection
	E0923 10:54:52.402748       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42750: use of closed network connection
	E0923 10:54:52.693691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42788: use of closed network connection
	E0923 10:54:52.868191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42814: use of closed network connection
	E0923 10:54:53.230910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42838: use of closed network connection
	E0923 10:54:53.405713       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42860: use of closed network connection
	E0923 10:54:53.587256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42870: use of closed network connection
	W0923 10:56:21.308721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.234]
	
	
	==> kube-controller-manager [4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d] <==
	I0923 10:55:25.124525       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-790780-m04" podCIDRs=["10.244.3.0/24"]
	I0923 10:55:25.124586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.124620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.133509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.356496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.728032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:26.243588       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-790780-m04"
	I0923 10:55:26.283171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.507667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.553251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.470149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.543154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:35.178257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:55:46.224292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.262261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:55.382846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:56:46.290698       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:56:46.290858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.314933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.418190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.658083ms"
	I0923 10:56:46.418270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.621µs"
	I0923 10:56:48.568648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:51.466837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	
	
	==> kube-proxy [20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:52:09.262552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:52:09.284499       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.234"]
	E0923 10:52:09.284588       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:52:09.317271       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:52:09.317394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:52:09.317457       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:52:09.320801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:52:09.321989       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:52:09.322038       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:52:09.326499       1 config.go:199] "Starting service config controller"
	I0923 10:52:09.327483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:52:09.328524       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:52:09.328570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:52:09.331934       1 config.go:328] "Starting node config controller"
	I0923 10:52:09.331976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:52:09.428869       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:52:09.429192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:52:09.432816       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e] <==
	E0923 10:52:00.723488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:52:00.842918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:52:00.843015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:52:03.091035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:54:44.751853       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8af6924d-0142-47f2-8cbe-927fbdaa50d7" pod="default/busybox-7dff88458-hdk9n" assumedNode="ha-790780-m02" currentNode="ha-790780-m03"
	E0923 10:54:44.780763       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m03"
	E0923 10:54:44.781985       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8af6924d-0142-47f2-8cbe-927fbdaa50d7(default/busybox-7dff88458-hdk9n) was assumed on ha-790780-m03 but assigned to ha-790780-m02" pod="default/busybox-7dff88458-hdk9n"
	E0923 10:54:44.782087       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" pod="default/busybox-7dff88458-hdk9n"
	I0923 10:54:44.782173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m02"
	E0923 10:55:25.174653       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xmfxv" node="ha-790780-m04"
	E0923 10:55:25.174983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-xmfxv"
	E0923 10:55:25.175545       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-58k4g" node="ha-790780-m04"
	E0923 10:55:25.178321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-58k4g"
	E0923 10:55:25.223677       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.224053       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 143d16c9-72ab-4693-86a9-227280e3d88b(kube-system/kindnet-rhmrv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rhmrv"
	E0923 10:55:25.224238       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-rhmrv"
	I0923 10:55:25.224407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.257675       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.257807       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 20bf7e97-ed43-402a-b267-4c1d2f4b5bbf(kube-system/kindnet-sz6cc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sz6cc"
	E0923 10:55:25.257863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-sz6cc"
	I0923 10:55:25.257906       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.260301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 10:55:25.260462       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6f2d4b5-c6d7-4f34-b81a-2644640ae3bb(kube-system/kube-proxy-ghvw7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvw7"
	E0923 10:55:25.260529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-ghvw7"
	I0923 10:55:25.260575       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	
	
	==> kubelet <==
	Sep 23 10:57:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 10:57:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752554    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752656    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759306    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759943    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761662    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761739    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763857    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763900    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767538    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767974    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770316    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770429    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.632462    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773513    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773536    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775771    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.777799    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.778161    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-790780 -n ha-790780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-790780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.375871991s)
ha_test.go:413: expected profile "ha-790780" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-790780\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-790780\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-790780\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.234\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.43\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.128\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.134\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-790780 -n ha-790780
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 logs -n 25: (1.411950295s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m03_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m04 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp testdata/cp-test.txt                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m03 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-790780 node stop m02 -v=7                                                    | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:51:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
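The header format noted above is the standard klog/glog prefix. A minimal sketch of pulling those fields apart with Go's standard library (the regexp and the sample line below are illustrative only, not part of minikube):

package main

import (
	"fmt"
	"regexp"
)

// Matches the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg header.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0923 10:51:23.890810   24995 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}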
	I0923 10:51:23.890810   24995 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:51:23.891041   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891049   24995 out.go:358] Setting ErrFile to fd 2...
	I0923 10:51:23.891053   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891205   24995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:51:23.891746   24995 out.go:352] Setting JSON to false
	I0923 10:51:23.892628   24995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2027,"bootTime":1727086657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:51:23.892719   24995 start.go:139] virtualization: kvm guest
	I0923 10:51:23.894714   24995 out.go:177] * [ha-790780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:51:23.896009   24995 notify.go:220] Checking for updates...
	I0923 10:51:23.896015   24995 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:51:23.897316   24995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:51:23.898483   24995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:51:23.899745   24995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:23.900930   24995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:51:23.902097   24995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:51:23.903412   24995 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:51:23.936575   24995 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:51:23.937738   24995 start.go:297] selected driver: kvm2
	I0923 10:51:23.937760   24995 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:51:23.937777   24995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:51:23.938571   24995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.938654   24995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:51:23.953375   24995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:51:23.953445   24995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:51:23.953711   24995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:51:23.953749   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:23.953813   24995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 10:51:23.953825   24995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:51:23.953893   24995 start.go:340] cluster config:
	{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:23.954007   24995 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.956292   24995 out.go:177] * Starting "ha-790780" primary control-plane node in "ha-790780" cluster
	I0923 10:51:23.957482   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:23.957517   24995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:51:23.957529   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:51:23.957599   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:51:23.957611   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:51:23.957934   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:23.957961   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json: {Name:mk715d227144254f94a596853caa0306f08b9846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:23.958130   24995 start.go:360] acquireMachinesLock for ha-790780: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:51:23.958172   24995 start.go:364] duration metric: took 22.743µs to acquireMachinesLock for "ha-790780"
	I0923 10:51:23.958195   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:51:23.958264   24995 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:51:23.959776   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:51:23.959913   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:51:23.959959   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:51:23.974405   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0923 10:51:23.974852   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:51:23.975494   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:51:23.975517   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:51:23.975789   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:51:23.975953   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:23.976064   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:23.976227   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:51:23.976305   24995 client.go:168] LocalClient.Create starting
	I0923 10:51:23.976394   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:51:23.976453   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976474   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976558   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:51:23.976590   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976607   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976637   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:51:23.976646   24995 main.go:141] libmachine: (ha-790780) Calling .PreCreateCheck
	I0923 10:51:23.976933   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:23.977298   24995 main.go:141] libmachine: Creating machine...
	I0923 10:51:23.977310   24995 main.go:141] libmachine: (ha-790780) Calling .Create
	I0923 10:51:23.977514   24995 main.go:141] libmachine: (ha-790780) Creating KVM machine...
	I0923 10:51:23.978674   24995 main.go:141] libmachine: (ha-790780) DBG | found existing default KVM network
	I0923 10:51:23.979392   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:23.979247   25018 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0923 10:51:23.979430   24995 main.go:141] libmachine: (ha-790780) DBG | created network xml: 
	I0923 10:51:23.979450   24995 main.go:141] libmachine: (ha-790780) DBG | <network>
	I0923 10:51:23.979460   24995 main.go:141] libmachine: (ha-790780) DBG |   <name>mk-ha-790780</name>
	I0923 10:51:23.979472   24995 main.go:141] libmachine: (ha-790780) DBG |   <dns enable='no'/>
	I0923 10:51:23.979483   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979494   24995 main.go:141] libmachine: (ha-790780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:51:23.979499   24995 main.go:141] libmachine: (ha-790780) DBG |     <dhcp>
	I0923 10:51:23.979504   24995 main.go:141] libmachine: (ha-790780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:51:23.979512   24995 main.go:141] libmachine: (ha-790780) DBG |     </dhcp>
	I0923 10:51:23.979520   24995 main.go:141] libmachine: (ha-790780) DBG |   </ip>
	I0923 10:51:23.979526   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979532   24995 main.go:141] libmachine: (ha-790780) DBG | </network>
	I0923 10:51:23.979541   24995 main.go:141] libmachine: (ha-790780) DBG | 
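The XML printed above is the private libvirt network (mk-ha-790780) that the kvm2 driver defines before the VM exists. A rough by-hand equivalent that shells out to virsh is sketched below; it is an illustration only (minikube goes through the driver plugin, and the temp-file name here is made up):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same shape as the <network> definition in the log above; values are illustrative.
	xml := `<network>
  <name>mk-ha-790780</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`
	if err := os.WriteFile("mk-ha-790780.xml", []byte(xml), 0o644); err != nil {
		log.Fatal(err)
	}
	// net-define registers the network with libvirt, net-start brings it up.
	for _, args := range [][]string{{"net-define", "mk-ha-790780.xml"}, {"net-start", "mk-ha-790780"}} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}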
	I0923 10:51:23.984532   24995 main.go:141] libmachine: (ha-790780) DBG | trying to create private KVM network mk-ha-790780 192.168.39.0/24...
	I0923 10:51:24.046915   24995 main.go:141] libmachine: (ha-790780) DBG | private KVM network mk-ha-790780 192.168.39.0/24 created
	I0923 10:51:24.046951   24995 main.go:141] libmachine: (ha-790780) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.046970   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.046901   25018 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.046982   24995 main.go:141] libmachine: (ha-790780) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:51:24.047052   24995 main.go:141] libmachine: (ha-790780) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:51:24.290133   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.289993   25018 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa...
	I0923 10:51:24.626743   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626586   25018 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk...
	I0923 10:51:24.626779   24995 main.go:141] libmachine: (ha-790780) DBG | Writing magic tar header
	I0923 10:51:24.626794   24995 main.go:141] libmachine: (ha-790780) DBG | Writing SSH key tar header
	I0923 10:51:24.626805   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626737   25018 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.626913   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 (perms=drwx------)
	I0923 10:51:24.626940   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780
	I0923 10:51:24.626950   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:51:24.626966   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:51:24.626976   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:51:24.626990   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:51:24.627002   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:51:24.627026   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:51:24.627037   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:24.627047   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.627061   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:51:24.627079   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:51:24.627093   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:51:24.627102   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home
	I0923 10:51:24.627113   24995 main.go:141] libmachine: (ha-790780) DBG | Skipping /home - not owner
	I0923 10:51:24.628104   24995 main.go:141] libmachine: (ha-790780) define libvirt domain using xml: 
	I0923 10:51:24.628127   24995 main.go:141] libmachine: (ha-790780) <domain type='kvm'>
	I0923 10:51:24.628137   24995 main.go:141] libmachine: (ha-790780)   <name>ha-790780</name>
	I0923 10:51:24.628145   24995 main.go:141] libmachine: (ha-790780)   <memory unit='MiB'>2200</memory>
	I0923 10:51:24.628153   24995 main.go:141] libmachine: (ha-790780)   <vcpu>2</vcpu>
	I0923 10:51:24.628162   24995 main.go:141] libmachine: (ha-790780)   <features>
	I0923 10:51:24.628169   24995 main.go:141] libmachine: (ha-790780)     <acpi/>
	I0923 10:51:24.628175   24995 main.go:141] libmachine: (ha-790780)     <apic/>
	I0923 10:51:24.628183   24995 main.go:141] libmachine: (ha-790780)     <pae/>
	I0923 10:51:24.628200   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628210   24995 main.go:141] libmachine: (ha-790780)   </features>
	I0923 10:51:24.628219   24995 main.go:141] libmachine: (ha-790780)   <cpu mode='host-passthrough'>
	I0923 10:51:24.628231   24995 main.go:141] libmachine: (ha-790780)   
	I0923 10:51:24.628242   24995 main.go:141] libmachine: (ha-790780)   </cpu>
	I0923 10:51:24.628248   24995 main.go:141] libmachine: (ha-790780)   <os>
	I0923 10:51:24.628256   24995 main.go:141] libmachine: (ha-790780)     <type>hvm</type>
	I0923 10:51:24.628266   24995 main.go:141] libmachine: (ha-790780)     <boot dev='cdrom'/>
	I0923 10:51:24.628274   24995 main.go:141] libmachine: (ha-790780)     <boot dev='hd'/>
	I0923 10:51:24.628283   24995 main.go:141] libmachine: (ha-790780)     <bootmenu enable='no'/>
	I0923 10:51:24.628289   24995 main.go:141] libmachine: (ha-790780)   </os>
	I0923 10:51:24.628298   24995 main.go:141] libmachine: (ha-790780)   <devices>
	I0923 10:51:24.628316   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='cdrom'>
	I0923 10:51:24.628332   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/boot2docker.iso'/>
	I0923 10:51:24.628339   24995 main.go:141] libmachine: (ha-790780)       <target dev='hdc' bus='scsi'/>
	I0923 10:51:24.628343   24995 main.go:141] libmachine: (ha-790780)       <readonly/>
	I0923 10:51:24.628348   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628352   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='disk'>
	I0923 10:51:24.628365   24995 main.go:141] libmachine: (ha-790780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:51:24.628374   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk'/>
	I0923 10:51:24.628379   24995 main.go:141] libmachine: (ha-790780)       <target dev='hda' bus='virtio'/>
	I0923 10:51:24.628383   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628388   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628398   24995 main.go:141] libmachine: (ha-790780)       <source network='mk-ha-790780'/>
	I0923 10:51:24.628422   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628441   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628451   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628456   24995 main.go:141] libmachine: (ha-790780)       <source network='default'/>
	I0923 10:51:24.628464   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628468   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628474   24995 main.go:141] libmachine: (ha-790780)     <serial type='pty'>
	I0923 10:51:24.628489   24995 main.go:141] libmachine: (ha-790780)       <target port='0'/>
	I0923 10:51:24.628497   24995 main.go:141] libmachine: (ha-790780)     </serial>
	I0923 10:51:24.628501   24995 main.go:141] libmachine: (ha-790780)     <console type='pty'>
	I0923 10:51:24.628509   24995 main.go:141] libmachine: (ha-790780)       <target type='serial' port='0'/>
	I0923 10:51:24.628513   24995 main.go:141] libmachine: (ha-790780)     </console>
	I0923 10:51:24.628518   24995 main.go:141] libmachine: (ha-790780)     <rng model='virtio'>
	I0923 10:51:24.628524   24995 main.go:141] libmachine: (ha-790780)       <backend model='random'>/dev/random</backend>
	I0923 10:51:24.628536   24995 main.go:141] libmachine: (ha-790780)     </rng>
	I0923 10:51:24.628558   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628571   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628577   24995 main.go:141] libmachine: (ha-790780)   </devices>
	I0923 10:51:24.628588   24995 main.go:141] libmachine: (ha-790780) </domain>
	I0923 10:51:24.628594   24995 main.go:141] libmachine: (ha-790780) 
	I0923 10:51:24.633208   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:13:36:c6 in network default
	I0923 10:51:24.633757   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:24.633774   24995 main.go:141] libmachine: (ha-790780) Ensuring networks are active...
	I0923 10:51:24.634465   24995 main.go:141] libmachine: (ha-790780) Ensuring network default is active
	I0923 10:51:24.634776   24995 main.go:141] libmachine: (ha-790780) Ensuring network mk-ha-790780 is active
	I0923 10:51:24.635311   24995 main.go:141] libmachine: (ha-790780) Getting domain xml...
	I0923 10:51:24.635925   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:25.814040   24995 main.go:141] libmachine: (ha-790780) Waiting to get IP...
	I0923 10:51:25.814916   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:25.815340   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:25.815417   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:25.815355   25018 retry.go:31] will retry after 302.426541ms: waiting for machine to come up
	I0923 10:51:26.119886   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.120307   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.120331   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.120269   25018 retry.go:31] will retry after 296.601666ms: waiting for machine to come up
	I0923 10:51:26.418700   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.419028   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.419055   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.418981   25018 retry.go:31] will retry after 377.849162ms: waiting for machine to come up
	I0923 10:51:26.798501   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.798922   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.798948   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.798856   25018 retry.go:31] will retry after 450.118776ms: waiting for machine to come up
	I0923 10:51:27.250394   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.250790   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.250808   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.250758   25018 retry.go:31] will retry after 570.631994ms: waiting for machine to come up
	I0923 10:51:27.822428   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.822886   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.822908   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.822851   25018 retry.go:31] will retry after 623.272262ms: waiting for machine to come up
	I0923 10:51:28.447752   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:28.448147   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:28.448174   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:28.448108   25018 retry.go:31] will retry after 1.077429863s: waiting for machine to come up
	I0923 10:51:29.527061   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:29.527469   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:29.527505   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:29.527430   25018 retry.go:31] will retry after 917.693346ms: waiting for machine to come up
	I0923 10:51:30.446246   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:30.446572   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:30.446596   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:30.446529   25018 retry.go:31] will retry after 1.557196838s: waiting for machine to come up
	I0923 10:51:32.006148   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:32.006519   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:32.006543   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:32.006479   25018 retry.go:31] will retry after 2.085720919s: waiting for machine to come up
	I0923 10:51:34.093658   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:34.094039   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:34.094071   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:34.093997   25018 retry.go:31] will retry after 2.432097525s: waiting for machine to come up
	I0923 10:51:36.529456   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:36.529801   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:36.529829   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:36.529771   25018 retry.go:31] will retry after 3.373414151s: waiting for machine to come up
	I0923 10:51:39.904476   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:39.904832   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:39.904859   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:39.904782   25018 retry.go:31] will retry after 4.54310411s: waiting for machine to come up
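Each "will retry after …" line above comes from a wait loop that polls for the VM's DHCP lease with a growing, jittered delay. A self-contained sketch of that pattern, using only the standard library (waitForIP and the fake lookup are hypothetical names, not minikube's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with an increasing, jittered delay until it returns an
// address or the overall deadline passes, mirroring the retry.go lines above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second { // back off, but cap the growth
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 2*time.Second {
			return "192.168.39.234", nil // pretend the DHCP lease finally shows up
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}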
	I0923 10:51:44.449079   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449524   24995 main.go:141] libmachine: (ha-790780) Found IP for machine: 192.168.39.234
	I0923 10:51:44.449566   24995 main.go:141] libmachine: (ha-790780) Reserving static IP address...
	I0923 10:51:44.449583   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has current primary IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449899   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find host DHCP lease matching {name: "ha-790780", mac: "52:54:00:56:51:7d", ip: "192.168.39.234"} in network mk-ha-790780
	I0923 10:51:44.518563   24995 main.go:141] libmachine: (ha-790780) DBG | Getting to WaitForSSH function...
	I0923 10:51:44.518595   24995 main.go:141] libmachine: (ha-790780) Reserved static IP address: 192.168.39.234
	I0923 10:51:44.518615   24995 main.go:141] libmachine: (ha-790780) Waiting for SSH to be available...
	I0923 10:51:44.520920   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521300   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.521330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521451   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH client type: external
	I0923 10:51:44.521486   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa (-rw-------)
	I0923 10:51:44.521531   24995 main.go:141] libmachine: (ha-790780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:51:44.521546   24995 main.go:141] libmachine: (ha-790780) DBG | About to run SSH command:
	I0923 10:51:44.521554   24995 main.go:141] libmachine: (ha-790780) DBG | exit 0
	I0923 10:51:44.645412   24995 main.go:141] libmachine: (ha-790780) DBG | SSH cmd err, output: <nil>: 
	I0923 10:51:44.645692   24995 main.go:141] libmachine: (ha-790780) KVM machine creation complete!
	I0923 10:51:44.645984   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:44.646583   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646744   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646893   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:51:44.646905   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:51:44.648172   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:51:44.648194   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:51:44.648202   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:51:44.648210   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.650665   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.650987   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.651020   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.651139   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.651308   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651457   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651573   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.651700   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.651893   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.651906   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:51:44.756746   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
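The "About to run SSH command: exit 0" exchange above is how machine readiness is detected: a no-op command is run over SSH until it succeeds. A minimal sketch of the same probe using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log (sshReady is an illustrative name, not the code under test):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshReady dials host:22 as the given user and runs "exit 0", like the
// WaitForSSH probe in the log above.
func sshReady(host, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.234", "docker",
		"/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}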
	I0923 10:51:44.756773   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:51:44.756782   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.759344   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759648   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.759681   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759843   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.760022   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760232   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760420   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.760578   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.760787   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.760799   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:51:44.870171   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:51:44.870267   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:51:44.870273   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:51:44.870280   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870545   24995 buildroot.go:166] provisioning hostname "ha-790780"
	I0923 10:51:44.870571   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870747   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.873216   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873593   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.873628   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873723   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.873886   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874025   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874142   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.874274   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.874442   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.874453   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780 && echo "ha-790780" | sudo tee /etc/hostname
	I0923 10:51:44.995765   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 10:51:44.995787   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.998312   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998668   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.998696   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998853   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.999016   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999145   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999274   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.999435   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.999654   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.999678   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:51:45.115136   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:51:45.115177   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:51:45.115207   24995 buildroot.go:174] setting up certificates
	I0923 10:51:45.115216   24995 provision.go:84] configureAuth start
	I0923 10:51:45.115226   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:45.115475   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.117929   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118257   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.118279   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118435   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.120330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120597   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.120620   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120789   24995 provision.go:143] copyHostCerts
	I0923 10:51:45.120818   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120862   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:51:45.120884   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120966   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:51:45.121085   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121144   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:51:45.121152   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121191   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:51:45.121264   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121286   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:51:45.121292   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121321   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:51:45.121410   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780 san=[127.0.0.1 192.168.39.234 ha-790780 localhost minikube]
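The "generating server cert" line above shows the SAN list baked into server.pem: the loopback address, the VM's IP, and the hostname aliases, signed by the profile's CA. A condensed sketch of issuing a certificate like that with Go's crypto/x509; a throwaway CA key pair stands in for the real ca.pem/ca-key.pem and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (in the real flow this is loaded from ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-790780"}},
		DNSNames:     []string{"ha-790780", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.234")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}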
	I0923 10:51:45.266715   24995 provision.go:177] copyRemoteCerts
	I0923 10:51:45.266777   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:51:45.266798   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.269666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.269959   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.269988   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.270213   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.270378   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.270482   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.270568   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.355778   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:51:45.355843   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:51:45.380730   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:51:45.380795   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 10:51:45.414661   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:51:45.414743   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:51:45.441465   24995 provision.go:87] duration metric: took 326.238007ms to configureAuth
	I0923 10:51:45.441495   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:51:45.441678   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:51:45.441758   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.444126   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444463   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.444481   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444672   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.444841   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445006   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445137   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.445259   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.445469   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.445484   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:51:45.681011   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:51:45.681063   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:51:45.681071   24995 main.go:141] libmachine: (ha-790780) Calling .GetURL
	I0923 10:51:45.682285   24995 main.go:141] libmachine: (ha-790780) DBG | Using libvirt version 6000000
	I0923 10:51:45.684579   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.684908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.684938   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.685089   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:51:45.685101   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:51:45.685107   24995 client.go:171] duration metric: took 21.708786455s to LocalClient.Create
	I0923 10:51:45.685125   24995 start.go:167] duration metric: took 21.708900673s to libmachine.API.Create "ha-790780"
	I0923 10:51:45.685138   24995 start.go:293] postStartSetup for "ha-790780" (driver="kvm2")
	I0923 10:51:45.685151   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:51:45.685172   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.685421   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:51:45.685449   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.687596   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.687908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.687933   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.688073   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.688250   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.688408   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.688548   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.771920   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:51:45.776355   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:51:45.776391   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:51:45.776469   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:51:45.776563   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:51:45.776575   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:51:45.776693   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:51:45.786199   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:45.811518   24995 start.go:296] duration metric: took 126.349059ms for postStartSetup
	I0923 10:51:45.811609   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:45.812294   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.815129   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815486   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.815514   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815712   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:45.815895   24995 start.go:128] duration metric: took 21.857620166s to createHost
	I0923 10:51:45.815920   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.818316   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818630   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.818651   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818850   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.819010   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819165   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819278   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.819431   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.819590   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.819599   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:51:45.926174   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088705.899223528
	
	I0923 10:51:45.926195   24995 fix.go:216] guest clock: 1727088705.899223528
	I0923 10:51:45.926202   24995 fix.go:229] Guest: 2024-09-23 10:51:45.899223528 +0000 UTC Remote: 2024-09-23 10:51:45.81591122 +0000 UTC m=+21.959703843 (delta=83.312308ms)
	I0923 10:51:45.926237   24995 fix.go:200] guest clock delta is within tolerance: 83.312308ms
	I0923 10:51:45.926247   24995 start.go:83] releasing machines lock for "ha-790780", held for 21.968060369s
	I0923 10:51:45.926269   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.926484   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.929017   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929273   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.929296   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929451   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.929900   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930074   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930159   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:51:45.930211   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.930270   24995 ssh_runner.go:195] Run: cat /version.json
	I0923 10:51:45.930294   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.932829   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933159   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933185   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933203   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933326   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.933490   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.933624   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.933676   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933701   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933776   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.934053   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.934206   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.934327   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.934455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:46.030649   24995 ssh_runner.go:195] Run: systemctl --version
	I0923 10:51:46.036429   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:51:46.192093   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:51:46.197962   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:51:46.198029   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:51:46.215140   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:51:46.215162   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:51:46.215243   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:51:46.230784   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:51:46.244349   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:51:46.244409   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:51:46.258034   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:51:46.272100   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:51:46.381469   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:51:46.539101   24995 docker.go:233] disabling docker service ...
	I0923 10:51:46.539174   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:51:46.552908   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:51:46.565651   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:51:46.682294   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:51:46.796364   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:51:46.811412   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:51:46.829576   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:51:46.829645   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.839695   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:51:46.839786   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.849955   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.860106   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.870333   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:51:46.880826   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.891077   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.908248   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.918775   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:51:46.928824   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:51:46.928877   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:51:46.941980   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:51:46.951517   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:47.065808   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:51:47.163613   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:51:47.163683   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:51:47.168401   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:51:47.168449   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:51:47.172083   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:51:47.211404   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:51:47.211475   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.237894   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.265905   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:51:47.267109   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:47.269676   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.269976   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:47.269998   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.270189   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:51:47.274345   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:47.287451   24995 kubeadm.go:883] updating cluster {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:51:47.287548   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:47.287587   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:47.320493   24995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:51:47.320563   24995 ssh_runner.go:195] Run: which lz4
	I0923 10:51:47.324493   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 10:51:47.324590   24995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:51:47.328614   24995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:51:47.328641   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:51:48.664218   24995 crio.go:462] duration metric: took 1.339658259s to copy over tarball
	I0923 10:51:48.664282   24995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:51:50.637991   24995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.973686302s)
	I0923 10:51:50.638022   24995 crio.go:469] duration metric: took 1.973779288s to extract the tarball
	I0923 10:51:50.638029   24995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 10:51:50.675284   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:50.719521   24995 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:51:50.719546   24995 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:51:50.719554   24995 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0923 10:51:50.719685   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:51:50.719772   24995 ssh_runner.go:195] Run: crio config
	I0923 10:51:50.771719   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:50.771741   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:51:50.771749   24995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:51:50.771771   24995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-790780 NodeName:ha-790780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:51:50.771891   24995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-790780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:51:50.771915   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:51:50.771953   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:51:50.788554   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:51:50.788662   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0923 10:51:50.788713   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:51:50.798905   24995 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:51:50.798967   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 10:51:50.808504   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 10:51:50.825113   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:51:50.841896   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 10:51:50.858441   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 10:51:50.875731   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:51:50.879691   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:50.892112   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:51.019767   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:51:51.037039   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.234
	I0923 10:51:51.037069   24995 certs.go:194] generating shared ca certs ...
	I0923 10:51:51.037091   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.037268   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:51:51.037324   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:51:51.037339   24995 certs.go:256] generating profile certs ...
	I0923 10:51:51.037431   24995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:51:51.037451   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt with IP's: []
	I0923 10:51:51.451020   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt ...
	I0923 10:51:51.451047   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt: {Name:mk7c4e9362162608bb6c01090da1551aaa823d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451244   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key ...
	I0923 10:51:51.451267   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key: {Name:mkcd6bfa32a894b89017c31deaa26203b3b4a176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451372   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888
	I0923 10:51:51.451392   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.254]
	I0923 10:51:51.607359   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 ...
	I0923 10:51:51.607386   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888: {Name:mka1f4b6ed48e33311f672d8b442f93c1d7c681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607561   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 ...
	I0923 10:51:51.607580   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888: {Name:mk49e13f50fd1588f0bd343a1960a01127e6eea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607676   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:51:51.607836   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:51:51.607925   24995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:51:51.607944   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt with IP's: []
	I0923 10:51:51.677169   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt ...
	I0923 10:51:51.677196   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt: {Name:mkd6d1ef61128b90a97b097c5fd8695ddf079ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677369   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key ...
	I0923 10:51:51.677400   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key: {Name:mk47fffc62dd3ae10bfeea7ae4b46f34ad5c053e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677517   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:51:51.677535   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:51:51.677548   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:51:51.677618   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:51:51.677647   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:51:51.677668   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:51:51.677686   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:51:51.677703   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:51:51.677763   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:51:51.677808   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:51:51.677821   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:51:51.677855   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:51:51.677884   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:51:51.677916   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:51:51.677966   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:51.678003   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:51.678023   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:51:51.678049   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.679006   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:51:51.705139   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:51:51.728566   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:51:51.751552   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:51:51.775089   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 10:51:51.801987   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:51:51.826155   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:51:51.852767   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:51:51.876344   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:51:51.905311   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:51:51.928779   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:51:51.952260   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:51:51.969409   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:51:51.975384   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:51:51.986501   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.990964   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.991023   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.996747   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:51:52.007942   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:51:52.018842   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023215   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023268   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.028919   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:51:52.039648   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:51:52.050482   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054942   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054996   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.061057   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:51:52.072692   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:51:52.076951   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:51:52.077018   24995 kubeadm.go:392] StartCluster: {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:52.077118   24995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:51:52.077175   24995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:51:52.116347   24995 cri.go:89] found id: ""
	I0923 10:51:52.116428   24995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:51:52.126761   24995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:51:52.140367   24995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:51:52.152008   24995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:51:52.152029   24995 kubeadm.go:157] found existing configuration files:
	
	I0923 10:51:52.152082   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:51:52.162100   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:51:52.162178   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:51:52.172716   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:51:52.182352   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:51:52.182416   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:51:52.192324   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.201509   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:51:52.201567   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.211076   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:51:52.220241   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:51:52.220301   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:51:52.229931   24995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:51:52.330228   24995 kubeadm.go:310] W0923 10:51:52.311529     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.331060   24995 kubeadm.go:310] W0923 10:51:52.312477     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.439125   24995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:52:03.033231   24995 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:52:03.033332   24995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:52:03.033492   24995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:52:03.033623   24995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:52:03.033751   24995 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:52:03.033844   24995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:52:03.035457   24995 out.go:235]   - Generating certificates and keys ...
	I0923 10:52:03.035550   24995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:52:03.035642   24995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:52:03.035741   24995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:52:03.035823   24995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:52:03.035900   24995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:52:03.035992   24995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:52:03.036084   24995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:52:03.036211   24995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036285   24995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:52:03.036444   24995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036563   24995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:52:03.036657   24995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:52:03.036710   24995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:52:03.036757   24995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:52:03.036842   24995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:52:03.036923   24995 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:52:03.037009   24995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:52:03.037098   24995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:52:03.037182   24995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:52:03.037302   24995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:52:03.037427   24995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:52:03.038904   24995 out.go:235]   - Booting up control plane ...
	I0923 10:52:03.039001   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:52:03.039082   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:52:03.039176   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:52:03.039295   24995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:52:03.039422   24995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:52:03.039482   24995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:52:03.039635   24995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:52:03.039761   24995 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:52:03.039849   24995 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.524673ms
	I0923 10:52:03.039940   24995 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:52:03.040024   24995 kubeadm.go:310] [api-check] The API server is healthy after 5.986201438s
	I0923 10:52:03.040175   24995 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:52:03.040361   24995 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:52:03.040444   24995 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:52:03.040632   24995 kubeadm.go:310] [mark-control-plane] Marking the node ha-790780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:52:03.040704   24995 kubeadm.go:310] [bootstrap-token] Using token: xsoed2.p6r9ib7q4k96hg0w
	I0923 10:52:03.042019   24995 out.go:235]   - Configuring RBAC rules ...
	I0923 10:52:03.042101   24995 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:52:03.042173   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:52:03.042294   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:52:03.042406   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:52:03.042505   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:52:03.042577   24995 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:52:03.042670   24995 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:52:03.042707   24995 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:52:03.042747   24995 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:52:03.042753   24995 kubeadm.go:310] 
	I0923 10:52:03.042801   24995 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:52:03.042807   24995 kubeadm.go:310] 
	I0923 10:52:03.042880   24995 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:52:03.042886   24995 kubeadm.go:310] 
	I0923 10:52:03.042910   24995 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:52:03.042960   24995 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:52:03.043006   24995 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:52:03.043012   24995 kubeadm.go:310] 
	I0923 10:52:03.043055   24995 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:52:03.043062   24995 kubeadm.go:310] 
	I0923 10:52:03.043106   24995 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:52:03.043112   24995 kubeadm.go:310] 
	I0923 10:52:03.043171   24995 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:52:03.043244   24995 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:52:03.043303   24995 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:52:03.043309   24995 kubeadm.go:310] 
	I0923 10:52:03.043383   24995 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:52:03.043484   24995 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:52:03.043504   24995 kubeadm.go:310] 
	I0923 10:52:03.043608   24995 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.043699   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:52:03.043719   24995 kubeadm.go:310] 	--control-plane 
	I0923 10:52:03.043725   24995 kubeadm.go:310] 
	I0923 10:52:03.043823   24995 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:52:03.043833   24995 kubeadm.go:310] 
	I0923 10:52:03.043941   24995 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.044037   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:52:03.044047   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:52:03.044054   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:52:03.045502   24995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:52:03.046832   24995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:52:03.052467   24995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:52:03.052487   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:52:03.076247   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
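Once the kindnet manifest apply above returns, the rollout can be sanity-checked from the node; assuming the daemonset keeps its upstream name of kindnet (not shown in this log), a quick check would be:

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet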
	I0923 10:52:03.444143   24995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:52:03.444243   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:03.444282   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780 minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=true
	I0923 10:52:03.495007   24995 ops.go:34] apiserver oom_adj: -16
	I0923 10:52:03.592144   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.092654   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.592338   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.092806   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.592594   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.092195   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.201502   24995 kubeadm.go:1113] duration metric: took 2.757318832s to wait for elevateKubeSystemPrivileges
	I0923 10:52:06.201546   24995 kubeadm.go:394] duration metric: took 14.124531532s to StartCluster
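The burst of "kubectl get sa default" calls between 10:52:03 and 10:52:06 is minikube polling for the default ServiceAccount before it considers kube-system privileges elevated; a rough shell equivalent of that polling loop (an illustration, not minikube's actual code) is:

	# poll until the "default" ServiceAccount has been created by the controller-manager
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done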
	I0923 10:52:06.201569   24995 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.201664   24995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.202567   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.202810   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:52:06.202807   24995 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.202841   24995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 10:52:06.202900   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:52:06.202929   24995 addons.go:69] Setting storage-provisioner=true in profile "ha-790780"
	I0923 10:52:06.202937   24995 addons.go:69] Setting default-storageclass=true in profile "ha-790780"
	I0923 10:52:06.202954   24995 addons.go:234] Setting addon storage-provisioner=true in "ha-790780"
	I0923 10:52:06.202961   24995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-790780"
	I0923 10:52:06.202988   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.203012   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.203296   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203334   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.203433   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203475   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.218688   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0923 10:52:06.218748   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0923 10:52:06.219240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219291   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219815   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219816   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219840   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.219858   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.220231   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220235   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220427   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.220753   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.220795   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.222626   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.222971   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 10:52:06.223539   24995 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 10:52:06.223901   24995 addons.go:234] Setting addon default-storageclass=true in "ha-790780"
	I0923 10:52:06.223946   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.224319   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.224365   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.236739   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0923 10:52:06.237265   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.237749   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.237769   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.238124   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.238287   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.238667   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0923 10:52:06.239113   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.239656   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.239679   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.239955   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.239993   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.240401   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.240443   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.241840   24995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:52:06.243145   24995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.243160   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:52:06.243172   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.246249   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246639   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.246666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246813   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.246982   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.247123   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.247259   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:06.256004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0923 10:52:06.256499   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.256973   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.256999   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.257343   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.257522   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.259210   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.259387   24995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.259399   24995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:52:06.259412   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.262267   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262666   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.262687   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262832   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.262990   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.263138   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.263273   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:06.304503   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:52:06.398460   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.446811   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.632495   24995 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
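The long sed pipeline above rewrites the coredns ConfigMap in place so that pods can resolve host.minikube.internal to the host's gateway address; reconstructed from the sed expressions (not dumped from the cluster), the edited Corefile stanza looks roughly like:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}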
	I0923 10:52:06.919542   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919563   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919636   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919658   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919873   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.919902   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.919910   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.919919   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919926   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919965   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920081   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920099   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920119   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920133   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920197   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.920208   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.920378   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920390   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920407   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920451   24995 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 10:52:06.920471   24995 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 10:52:06.920600   24995 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 10:52:06.920610   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.920623   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.920629   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.937923   24995 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 10:52:06.938595   24995 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 10:52:06.938612   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.938621   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.938629   24995 round_trippers.go:473]     Content-Type: application/json
	I0923 10:52:06.938632   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.947896   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:52:06.948322   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.948337   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.948594   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.948617   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.948630   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.950152   24995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 10:52:06.951554   24995 addons.go:510] duration metric: took 748.719933ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 10:52:06.951590   24995 start.go:246] waiting for cluster config update ...
	I0923 10:52:06.951605   24995 start.go:255] writing updated cluster config ...
	I0923 10:52:06.953365   24995 out.go:201] 
	I0923 10:52:06.954972   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.955040   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.956615   24995 out.go:177] * Starting "ha-790780-m02" control-plane node in "ha-790780" cluster
	I0923 10:52:06.957684   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:52:06.957708   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:52:06.957808   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:52:06.957819   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:52:06.957884   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.958050   24995 start.go:360] acquireMachinesLock for ha-790780-m02: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:52:06.958105   24995 start.go:364] duration metric: took 32.264µs to acquireMachinesLock for "ha-790780-m02"
	I0923 10:52:06.958126   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.958191   24995 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 10:52:06.959878   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:52:06.959980   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.960026   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.976035   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0923 10:52:06.976582   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.977118   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.977143   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.977540   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.977757   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:06.977903   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:06.978091   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:52:06.978121   24995 client.go:168] LocalClient.Create starting
	I0923 10:52:06.978164   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:52:06.978206   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978227   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978286   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:52:06.978303   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978310   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978324   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:52:06.978329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .PreCreateCheck
	I0923 10:52:06.978542   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:06.978925   24995 main.go:141] libmachine: Creating machine...
	I0923 10:52:06.978941   24995 main.go:141] libmachine: (ha-790780-m02) Calling .Create
	I0923 10:52:06.979102   24995 main.go:141] libmachine: (ha-790780-m02) Creating KVM machine...
	I0923 10:52:06.980456   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing default KVM network
	I0923 10:52:06.980575   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing private KVM network mk-ha-790780
	I0923 10:52:06.980736   24995 main.go:141] libmachine: (ha-790780-m02) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:06.980762   24995 main.go:141] libmachine: (ha-790780-m02) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:52:06.980809   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:06.980717   25359 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:06.980894   24995 main.go:141] libmachine: (ha-790780-m02) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:52:07.232203   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.232068   25359 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa...
	I0923 10:52:07.333393   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333263   25359 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk...
	I0923 10:52:07.333421   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing magic tar header
	I0923 10:52:07.333438   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing SSH key tar header
	I0923 10:52:07.333446   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333398   25359 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:07.333511   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02
	I0923 10:52:07.333532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:52:07.333540   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 (perms=drwx------)
	I0923 10:52:07.333557   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:52:07.333571   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:52:07.333582   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:07.333598   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:52:07.333609   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:52:07.333623   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:52:07.333638   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:52:07.333647   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:52:07.333658   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:52:07.333669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home
	I0923 10:52:07.333679   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
	I0923 10:52:07.333718   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Skipping /home - not owner
	I0923 10:52:07.334599   24995 main.go:141] libmachine: (ha-790780-m02) define libvirt domain using xml: 
	I0923 10:52:07.334622   24995 main.go:141] libmachine: (ha-790780-m02) <domain type='kvm'>
	I0923 10:52:07.334660   24995 main.go:141] libmachine: (ha-790780-m02)   <name>ha-790780-m02</name>
	I0923 10:52:07.334682   24995 main.go:141] libmachine: (ha-790780-m02)   <memory unit='MiB'>2200</memory>
	I0923 10:52:07.334692   24995 main.go:141] libmachine: (ha-790780-m02)   <vcpu>2</vcpu>
	I0923 10:52:07.334705   24995 main.go:141] libmachine: (ha-790780-m02)   <features>
	I0923 10:52:07.334717   24995 main.go:141] libmachine: (ha-790780-m02)     <acpi/>
	I0923 10:52:07.334724   24995 main.go:141] libmachine: (ha-790780-m02)     <apic/>
	I0923 10:52:07.334732   24995 main.go:141] libmachine: (ha-790780-m02)     <pae/>
	I0923 10:52:07.334741   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.334753   24995 main.go:141] libmachine: (ha-790780-m02)   </features>
	I0923 10:52:07.334764   24995 main.go:141] libmachine: (ha-790780-m02)   <cpu mode='host-passthrough'>
	I0923 10:52:07.334772   24995 main.go:141] libmachine: (ha-790780-m02)   
	I0923 10:52:07.334781   24995 main.go:141] libmachine: (ha-790780-m02)   </cpu>
	I0923 10:52:07.334789   24995 main.go:141] libmachine: (ha-790780-m02)   <os>
	I0923 10:52:07.334798   24995 main.go:141] libmachine: (ha-790780-m02)     <type>hvm</type>
	I0923 10:52:07.334807   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='cdrom'/>
	I0923 10:52:07.334816   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='hd'/>
	I0923 10:52:07.334823   24995 main.go:141] libmachine: (ha-790780-m02)     <bootmenu enable='no'/>
	I0923 10:52:07.334834   24995 main.go:141] libmachine: (ha-790780-m02)   </os>
	I0923 10:52:07.334842   24995 main.go:141] libmachine: (ha-790780-m02)   <devices>
	I0923 10:52:07.334853   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='cdrom'>
	I0923 10:52:07.334882   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/boot2docker.iso'/>
	I0923 10:52:07.334904   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hdc' bus='scsi'/>
	I0923 10:52:07.334913   24995 main.go:141] libmachine: (ha-790780-m02)       <readonly/>
	I0923 10:52:07.334923   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334932   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='disk'>
	I0923 10:52:07.334946   24995 main.go:141] libmachine: (ha-790780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:52:07.334959   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk'/>
	I0923 10:52:07.334968   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hda' bus='virtio'/>
	I0923 10:52:07.334978   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334987   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.334997   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='mk-ha-790780'/>
	I0923 10:52:07.335007   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335023   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335035   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.335044   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='default'/>
	I0923 10:52:07.335058   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335109   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335132   24995 main.go:141] libmachine: (ha-790780-m02)     <serial type='pty'>
	I0923 10:52:07.335143   24995 main.go:141] libmachine: (ha-790780-m02)       <target port='0'/>
	I0923 10:52:07.335158   24995 main.go:141] libmachine: (ha-790780-m02)     </serial>
	I0923 10:52:07.335174   24995 main.go:141] libmachine: (ha-790780-m02)     <console type='pty'>
	I0923 10:52:07.335192   24995 main.go:141] libmachine: (ha-790780-m02)       <target type='serial' port='0'/>
	I0923 10:52:07.335204   24995 main.go:141] libmachine: (ha-790780-m02)     </console>
	I0923 10:52:07.335212   24995 main.go:141] libmachine: (ha-790780-m02)     <rng model='virtio'>
	I0923 10:52:07.335225   24995 main.go:141] libmachine: (ha-790780-m02)       <backend model='random'>/dev/random</backend>
	I0923 10:52:07.335234   24995 main.go:141] libmachine: (ha-790780-m02)     </rng>
	I0923 10:52:07.335249   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335266   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335277   24995 main.go:141] libmachine: (ha-790780-m02)   </devices>
	I0923 10:52:07.335286   24995 main.go:141] libmachine: (ha-790780-m02) </domain>
	I0923 10:52:07.335295   24995 main.go:141] libmachine: (ha-790780-m02) 
	I0923 10:52:07.341524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:71:94:5b in network default
	I0923 10:52:07.342077   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring networks are active...
	I0923 10:52:07.342095   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:07.342878   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network default is active
	I0923 10:52:07.343243   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network mk-ha-790780 is active
	I0923 10:52:07.343596   24995 main.go:141] libmachine: (ha-790780-m02) Getting domain xml...
	I0923 10:52:07.344221   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
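The kvm2 driver defines and boots the guest through the libvirt API rather than the CLI, but the effect of the "Getting domain xml" / "Creating domain" steps above is roughly what these virsh commands would do against the same domain XML (illustration only; the filename here is hypothetical):

	virsh --connect qemu:///system define ha-790780-m02.xml
	virsh --connect qemu:///system start ha-790780-m02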
	I0923 10:52:08.567103   24995 main.go:141] libmachine: (ha-790780-m02) Waiting to get IP...
	I0923 10:52:08.567991   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.568397   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.568451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.568387   25359 retry.go:31] will retry after 271.175765ms: waiting for machine to come up
	I0923 10:52:08.840977   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.841448   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.841471   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.841414   25359 retry.go:31] will retry after 362.305584ms: waiting for machine to come up
	I0923 10:52:09.205937   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.206493   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.206603   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.206454   25359 retry.go:31] will retry after 321.793905ms: waiting for machine to come up
	I0923 10:52:09.529876   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.530376   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.530401   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.530327   25359 retry.go:31] will retry after 559.183772ms: waiting for machine to come up
	I0923 10:52:10.091098   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.091500   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.091524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.091457   25359 retry.go:31] will retry after 578.148121ms: waiting for machine to come up
	I0923 10:52:10.671087   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.671615   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.671645   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.671580   25359 retry.go:31] will retry after 633.076035ms: waiting for machine to come up
	I0923 10:52:11.306241   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:11.306681   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:11.306701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:11.306639   25359 retry.go:31] will retry after 1.109332207s: waiting for machine to come up
	I0923 10:52:12.417432   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:12.417916   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:12.417942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:12.417872   25359 retry.go:31] will retry after 1.294744351s: waiting for machine to come up
	I0923 10:52:13.713819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:13.714303   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:13.714329   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:13.714250   25359 retry.go:31] will retry after 1.531952439s: waiting for machine to come up
	I0923 10:52:15.247542   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:15.248025   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:15.248057   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:15.247975   25359 retry.go:31] will retry after 1.941306258s: waiting for machine to come up
	I0923 10:52:17.190839   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:17.191321   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:17.191351   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:17.191284   25359 retry.go:31] will retry after 2.353774872s: waiting for machine to come up
	I0923 10:52:19.546668   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:19.547031   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:19.547055   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:19.546983   25359 retry.go:31] will retry after 2.747965423s: waiting for machine to come up
	I0923 10:52:22.297443   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:22.297864   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:22.297889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:22.297821   25359 retry.go:31] will retry after 4.500988279s: waiting for machine to come up
	I0923 10:52:26.799947   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:26.800373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:26.800398   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:26.800337   25359 retry.go:31] will retry after 3.653543746s: waiting for machine to come up
	I0923 10:52:30.458551   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.459044   24995 main.go:141] libmachine: (ha-790780-m02) Found IP for machine: 192.168.39.43
	I0923 10:52:30.459067   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.459075   24995 main.go:141] libmachine: (ha-790780-m02) Reserving static IP address...
	I0923 10:52:30.459483   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "ha-790780-m02", mac: "52:54:00:6f:fc:60", ip: "192.168.39.43"} in network mk-ha-790780
	I0923 10:52:30.533257   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:30.533288   24995 main.go:141] libmachine: (ha-790780-m02) Reserved static IP address: 192.168.39.43
	I0923 10:52:30.533301   24995 main.go:141] libmachine: (ha-790780-m02) Waiting for SSH to be available...
	I0923 10:52:30.536138   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.536313   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780
	I0923 10:52:30.536335   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find defined IP address of network mk-ha-790780 interface with MAC address 52:54:00:6f:fc:60
	I0923 10:52:30.536505   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:30.536532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:30.536568   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:30.536590   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:30.536606   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:30.540119   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: exit status 255: 
	I0923 10:52:30.540140   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 10:52:30.540147   24995 main.go:141] libmachine: (ha-790780-m02) DBG | command : exit 0
	I0923 10:52:30.540151   24995 main.go:141] libmachine: (ha-790780-m02) DBG | err     : exit status 255
	I0923 10:52:30.540162   24995 main.go:141] libmachine: (ha-790780-m02) DBG | output  : 
	I0923 10:52:33.541623   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:33.544182   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544547   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.544574   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544757   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:33.544784   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:33.544814   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:33.544831   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:33.544854   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:33.669504   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 10:52:33.669774   24995 main.go:141] libmachine: (ha-790780-m02) KVM machine creation complete!
	I0923 10:52:33.670110   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:33.670656   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.670934   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.671133   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:52:33.671150   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetState
	I0923 10:52:33.672305   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:52:33.672319   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:52:33.672324   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:52:33.672329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.674474   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.674843   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674997   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.675174   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675328   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675465   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.675610   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.675839   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.675852   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:52:33.776748   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:33.776774   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:52:33.776785   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.779405   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779751   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.779783   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779884   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.780088   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780269   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780419   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.780568   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.780760   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.780773   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:52:33.882210   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:52:33.882291   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:52:33.882305   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:52:33.882314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882575   24995 buildroot.go:166] provisioning hostname "ha-790780-m02"
	I0923 10:52:33.882600   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882773   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.885308   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885642   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.885677   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885853   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.886030   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886155   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886300   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.886430   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.886626   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.886642   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m02 && echo "ha-790780-m02" | sudo tee /etc/hostname
	I0923 10:52:34.003577   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m02
	
	I0923 10:52:34.003598   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.006028   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006433   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.006454   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.006821   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.006980   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.007139   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.007310   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.007465   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.007480   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:52:34.118625   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:34.118662   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:52:34.118683   24995 buildroot.go:174] setting up certificates
	I0923 10:52:34.118696   24995 provision.go:84] configureAuth start
	I0923 10:52:34.118714   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:34.118982   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.121671   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122010   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.122038   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122133   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.124342   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124650   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.124675   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124825   24995 provision.go:143] copyHostCerts
	I0923 10:52:34.124854   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124893   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:52:34.124906   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124985   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:52:34.125072   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125097   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:52:34.125107   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125144   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:52:34.125212   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125235   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:52:34.125242   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125281   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:52:34.125349   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m02 san=[127.0.0.1 192.168.39.43 ha-790780-m02 localhost minikube]
	I0923 10:52:34.193891   24995 provision.go:177] copyRemoteCerts
	I0923 10:52:34.193957   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:52:34.193986   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.196570   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.196865   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.196889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.197016   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.197136   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.197266   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.197369   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.281916   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:52:34.281976   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:52:34.308044   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:52:34.308105   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:52:34.333433   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:52:34.333520   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:52:34.360112   24995 provision.go:87] duration metric: took 241.398124ms to configureAuth
	I0923 10:52:34.360147   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:52:34.360368   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:34.360455   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.363054   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.363404   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363563   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.363803   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.363983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.364144   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.364318   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.364480   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.364494   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:52:34.591141   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:52:34.591170   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:52:34.591177   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetURL
	I0923 10:52:34.592369   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using libvirt version 6000000
	I0923 10:52:34.594796   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595094   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.595121   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595270   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:52:34.595283   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:52:34.595290   24995 client.go:171] duration metric: took 27.617159251s to LocalClient.Create
	I0923 10:52:34.595315   24995 start.go:167] duration metric: took 27.61722609s to libmachine.API.Create "ha-790780"
	I0923 10:52:34.595328   24995 start.go:293] postStartSetup for "ha-790780-m02" (driver="kvm2")
	I0923 10:52:34.595341   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:52:34.595379   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.595602   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:52:34.595632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.597589   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.597898   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.597926   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.598021   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.598195   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.598358   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.598520   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.684195   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:52:34.689242   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:52:34.689272   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:52:34.689348   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:52:34.689459   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:52:34.689471   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:52:34.689556   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:52:34.700320   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:34.725191   24995 start.go:296] duration metric: took 129.850231ms for postStartSetup
	I0923 10:52:34.725244   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:34.725799   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.728545   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.728886   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.728913   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.729093   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:34.729294   24995 start.go:128] duration metric: took 27.771090928s to createHost
	I0923 10:52:34.729314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.731286   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731644   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.731669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731823   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.731990   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732151   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732281   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.732440   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.732637   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.732658   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:52:34.834231   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088754.794402068
	
	I0923 10:52:34.834249   24995 fix.go:216] guest clock: 1727088754.794402068
	I0923 10:52:34.834255   24995 fix.go:229] Guest: 2024-09-23 10:52:34.794402068 +0000 UTC Remote: 2024-09-23 10:52:34.729306022 +0000 UTC m=+70.873098644 (delta=65.096046ms)
	I0923 10:52:34.834270   24995 fix.go:200] guest clock delta is within tolerance: 65.096046ms
	I0923 10:52:34.834274   24995 start.go:83] releasing machines lock for "ha-790780-m02", held for 27.876160912s
	I0923 10:52:34.834293   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.834511   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.837173   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.837494   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.837520   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.839594   24995 out.go:177] * Found network options:
	I0923 10:52:34.840920   24995 out.go:177]   - NO_PROXY=192.168.39.234
	W0923 10:52:34.842074   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842099   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842612   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842764   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842853   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:52:34.842888   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	W0923 10:52:34.842903   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842968   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:52:34.842983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.845348   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845558   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845723   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845847   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.845942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845969   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.846014   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846122   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.846203   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846268   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846323   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.846389   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846494   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:35.081176   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:52:35.087607   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:52:35.087663   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:52:35.103528   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:52:35.103555   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:52:35.103622   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:52:35.120834   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:52:35.135839   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:52:35.135902   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:52:35.150051   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:52:35.166191   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:52:35.300053   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:52:35.467434   24995 docker.go:233] disabling docker service ...
	I0923 10:52:35.467505   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:52:35.481901   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:52:35.494845   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:52:35.623420   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:52:35.753868   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:52:35.768422   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:52:35.787586   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:52:35.787649   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.799053   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:52:35.799126   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.810558   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.821594   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.832724   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:52:35.843898   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.855726   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.873592   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.884110   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:52:35.893791   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:52:35.893856   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:52:35.906807   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:52:35.916973   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:36.035527   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:52:36.128791   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:52:36.128861   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:52:36.133474   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:52:36.133527   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:52:36.137009   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:52:36.176502   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:52:36.176587   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.204178   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.234043   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:52:36.235621   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:52:36.236738   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:36.239083   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:36.239480   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239678   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:52:36.243606   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:52:36.255882   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:52:36.256081   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:36.256374   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.256416   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.270776   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:52:36.271240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.271692   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.271718   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.271991   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.272238   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:36.273724   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:36.274034   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.274069   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.288288   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0923 10:52:36.288706   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.289138   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.289156   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.289414   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.289558   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:36.289677   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.43
	I0923 10:52:36.289688   24995 certs.go:194] generating shared ca certs ...
	I0923 10:52:36.289705   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.289819   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:52:36.289854   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:52:36.289863   24995 certs.go:256] generating profile certs ...
	I0923 10:52:36.289959   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:52:36.289984   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0
	I0923 10:52:36.289997   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.254]
	I0923 10:52:36.380163   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 ...
	I0923 10:52:36.380191   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0: {Name:mkcca314f563c49b9f271f2aa6db3e6f62b32cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380347   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 ...
	I0923 10:52:36.380359   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0: {Name:mkec241aeb6bb82c01cd41cf66da0be3a70fdccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380434   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:52:36.380560   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:52:36.380681   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:52:36.380695   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:52:36.380707   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:52:36.380720   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:52:36.380735   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:52:36.380747   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:52:36.380759   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:52:36.380771   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:52:36.380783   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:52:36.380831   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:52:36.380860   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:52:36.380869   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:52:36.380891   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:52:36.380911   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:52:36.380932   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:52:36.380968   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:36.380992   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.381005   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.381017   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:52:36.381045   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:36.384036   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384404   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:36.384430   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384577   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:36.384750   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:36.384881   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:36.384987   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:36.457700   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:52:36.466345   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:52:36.478344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:52:36.483561   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:52:36.494070   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:52:36.498527   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:52:36.509289   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:52:36.514499   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:52:36.524608   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:52:36.528591   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:52:36.538971   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:52:36.542839   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:52:36.553841   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:52:36.579371   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:52:36.604546   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:52:36.628677   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:52:36.653097   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 10:52:36.680685   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:52:36.705242   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:52:36.729370   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:52:36.752651   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:52:36.776422   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:52:36.799568   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:52:36.823834   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:52:36.840782   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:52:36.857346   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:52:36.873712   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:52:36.889839   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:52:36.905626   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:52:36.921660   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:52:36.938136   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:52:36.943716   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:52:36.953982   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958476   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958521   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.964147   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:52:36.974525   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:52:36.985437   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989845   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989893   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.995312   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:52:37.005409   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:52:37.015583   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019922   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019974   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.025448   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:52:37.035595   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:52:37.039362   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:52:37.039415   24995 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.1 crio true true} ...
	I0923 10:52:37.039492   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:52:37.039513   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:52:37.039552   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:52:37.055529   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:52:37.055596   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 10:52:37.055650   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.065414   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:52:37.065472   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.075491   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:52:37.075506   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 10:52:37.075520   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.075497   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 10:52:37.075574   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.080294   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:52:37.080325   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:52:38.529041   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.529117   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.533986   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:52:38.534028   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:52:39.337289   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:52:39.353663   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.353773   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.358145   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:52:39.358182   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:52:39.672771   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:52:39.682637   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:52:39.699260   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:52:39.715572   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:52:39.732521   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:52:39.736488   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:52:39.748539   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:39.875794   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:52:39.893533   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:39.893887   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:39.893927   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:39.908489   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I0923 10:52:39.908913   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:39.909435   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:39.909466   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:39.909786   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:39.909988   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:39.910172   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:52:39.910308   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:52:39.910342   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:39.913308   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913748   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:39.913778   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913955   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:39.914131   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:39.914260   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:39.914383   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:40.061073   24995 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:40.061122   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I0923 10:53:01.101827   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (21.040673445s)
	I0923 10:53:01.101877   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:53:01.765759   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m02 minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:53:01.907605   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:53:02.022219   24995 start.go:319] duration metric: took 22.112042939s to joinCluster
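The join sequence logged above follows the usual kubeadm pattern: ask the existing control plane to print a join command (kubeadm token create --print-join-command), then run that command on the new machine with the extra control-plane flags seen in the log. The sketch below is only an illustration of that flow, assuming kubeadm is on PATH; it is not minikube's ssh_runner-based implementation and it stops short of actually executing the join.

    // joinsketch.go - sketch of the token-create / kubeadm-join flow above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Ask the existing control plane for a ready-made join command.
    	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		log.Fatalf("token create: %v", err)
    	}
    	join := strings.TrimSpace(string(out))

    	// Extend it for a control-plane join, mirroring the flags in the log.
    	join += " --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"

    	// On the target node this would then be run as: bash -c "sudo <join>".
    	fmt.Println("would run on the new node:", join)
    }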
	I0923 10:53:02.022286   24995 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:02.022624   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:02.023699   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:53:02.024977   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:02.301994   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:02.355631   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:53:02.355833   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:53:02.355886   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
	I0923 10:53:02.356182   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m02" to be "Ready" ...
	I0923 10:53:02.356275   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.356282   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.356289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.356293   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.365629   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:53:02.856673   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.856694   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.856703   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.856706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.865889   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:53:03.356651   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.356671   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.356680   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.356687   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.363168   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:53:03.857045   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.857073   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.857084   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.857090   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.860890   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.356575   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.356597   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.356604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.356608   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.359661   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.360223   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:04.856507   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.856529   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.856537   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.856540   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.860119   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.356700   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.356722   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.356728   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.356733   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.360476   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.856749   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.856773   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.856781   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.856784   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.860556   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.356805   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.356825   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.356833   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.356837   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.359991   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.361007   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:06.857386   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.857410   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.857422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.857428   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.860894   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:07.357257   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.357281   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.357291   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.357296   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.361346   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:07.856430   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.856457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.856468   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.856475   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.860130   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.357367   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.357402   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.357416   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.357422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.360772   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.361285   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:08.856627   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.856648   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.856656   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.856661   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.860220   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.357037   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.357059   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.357070   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.357075   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.360298   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.857427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.857457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.857469   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.857474   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.860786   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.357151   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.357171   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.357180   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.357183   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.360916   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.362707   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:10.857145   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.857166   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.857174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.857178   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.861809   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:11.356801   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.356822   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.356830   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.356834   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.360464   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:11.856414   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.856436   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.856447   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.856450   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.859649   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.357058   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.357081   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.357088   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.357092   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.361042   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.857390   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.857414   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.857424   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.857428   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.861016   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.861719   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:13.357113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.357138   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.357150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.357155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.360431   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:13.857223   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.857243   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.857251   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.857255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.860307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.357308   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.357331   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.357339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.357342   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.361127   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.856952   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.856977   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.856987   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.856992   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.860782   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.356456   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.356485   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.356496   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.356502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.359792   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.360494   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:15.856872   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.856897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.856907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.856912   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.860634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.356764   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.356786   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.356793   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.356798   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.360240   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.856427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.856454   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.856466   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.856472   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.860397   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.356784   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.356806   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.356814   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.356819   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.360664   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.361536   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:17.856878   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.856902   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.856910   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.856915   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.860694   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.356716   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.356739   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.356746   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.356750   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.360583   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.856463   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.856487   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.856495   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.856502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.860301   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:19.356990   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.357018   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.357028   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.357031   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.361547   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:19.362649   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:19.857046   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.857065   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.857073   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.857077   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.860596   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.357289   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.357312   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.357321   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.357326   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.361074   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.857154   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.857178   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.857186   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.857190   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.860563   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.357410   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.357434   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.357445   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.357449   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.362160   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.362767   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:21.857033   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.857057   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.857065   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.857071   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.860457   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.860908   24995 node_ready.go:49] node "ha-790780-m02" has status "Ready":"True"
	I0923 10:53:21.860928   24995 node_ready.go:38] duration metric: took 19.504727616s for node "ha-790780-m02" to be "Ready" ...
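The wait recorded above is a simple poll: re-fetch the Node object roughly every 500ms and check its Ready condition until it turns True or the 6m0s budget runs out. A rough standalone equivalent using client-go, assuming a kubeconfig at the default location; this is a hypothetical sketch, not minikube's own node_ready.go code.

    // nodeready.go - sketch of waiting for a node's Ready condition.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-790780-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					log.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	log.Fatal("timed out waiting for node to become Ready")
    }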
	I0923 10:53:21.860937   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:53:21.861016   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:21.861026   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.861033   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.861037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.865124   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.870946   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.871015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:53:21.871023   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.871030   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.871035   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.873727   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.874362   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.874375   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.874383   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.874386   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.876630   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.877063   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.877077   24995 pod_ready.go:82] duration metric: took 6.11171ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877085   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877131   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:53:21.877139   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.877145   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.877148   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.879422   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.879947   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.879959   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.879966   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.879971   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.881756   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.882229   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.882243   24995 pod_ready.go:82] duration metric: took 5.151724ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882250   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882288   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:53:21.882295   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.882301   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.882305   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.884597   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.885566   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.885580   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.885587   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.885590   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.887691   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.888066   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.888081   24995 pod_ready.go:82] duration metric: took 5.825391ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888088   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888136   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:53:21.888144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.888150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.888154   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.890206   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.890675   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.890689   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.890699   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.890706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.892638   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.892989   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.893005   24995 pod_ready.go:82] duration metric: took 4.911284ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.893019   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.057496   24995 request.go:632] Waited for 164.405368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057558   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057562   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.057569   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.057573   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.061586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.257674   24995 request.go:632] Waited for 195.391664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257753   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257761   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.257768   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.257772   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.260869   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.261571   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.261592   24995 pod_ready.go:82] duration metric: took 368.566383ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.261602   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.457665   24995 request.go:632] Waited for 195.996413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457743   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457752   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.457762   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.457769   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.463274   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:53:22.657157   24995 request.go:632] Waited for 193.295869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657236   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657245   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.657255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.657261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.661000   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.661818   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.661846   24995 pod_ready.go:82] duration metric: took 400.236588ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.661858   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.857792   24995 request.go:632] Waited for 195.86636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857859   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857865   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.857872   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.857878   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.861662   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.057689   24995 request.go:632] Waited for 195.383255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057812   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057824   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.057834   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.057838   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.061339   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.062080   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.062106   24995 pod_ready.go:82] duration metric: took 400.238848ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.062119   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.257074   24995 request.go:632] Waited for 194.846773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257139   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.257154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.257159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.261117   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.457215   24995 request.go:632] Waited for 195.281467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457266   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457271   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.457280   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.457285   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.460410   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.460927   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.460946   24995 pod_ready.go:82] duration metric: took 398.811897ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.460959   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.657058   24995 request.go:632] Waited for 196.030311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657133   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657142   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.657151   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.657160   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.660449   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.857439   24995 request.go:632] Waited for 196.364612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857511   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857517   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.857524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.857528   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.861085   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.861628   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.861646   24995 pod_ready.go:82] duration metric: took 400.678998ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.861658   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.057696   24995 request.go:632] Waited for 195.97414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057780   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057788   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.057803   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.057811   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.061523   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.257819   24995 request.go:632] Waited for 195.359423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257886   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257891   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.257898   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.257903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.260794   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:24.261474   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.261495   24995 pod_ready.go:82] duration metric: took 399.829683ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.261504   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.457623   24995 request.go:632] Waited for 196.060511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457731   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.457743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.457754   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.461018   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.657050   24995 request.go:632] Waited for 195.289482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657104   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657112   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.657119   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.657123   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.660508   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.661074   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.661111   24995 pod_ready.go:82] duration metric: took 399.600186ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.661130   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.857061   24995 request.go:632] Waited for 195.872756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857130   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857135   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.857142   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.857146   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.860206   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.057515   24995 request.go:632] Waited for 196.490026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057567   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057572   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.057579   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.057584   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.060963   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.061666   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:25.061685   24995 pod_ready.go:82] duration metric: took 400.549015ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:25.061695   24995 pod_ready.go:39] duration metric: took 3.200747429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:53:25.061708   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:53:25.061767   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:53:25.081513   24995 api_server.go:72] duration metric: took 23.059195196s to wait for apiserver process to appear ...
	I0923 10:53:25.081540   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:53:25.081558   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:53:25.085813   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:53:25.085884   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:53:25.085897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.085907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.085914   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.086702   24995 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 10:53:25.086786   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:53:25.086800   24995 api_server.go:131] duration metric: took 5.254846ms to wait for apiserver health ...
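The healthz probe above is just an HTTPS GET that expects a 200 response with the body "ok". A stripped-down sketch follows; certificate verification is skipped here only to keep the example self-contained, whereas the logged client trusts the minikube CA from the kubeconfig.

    // healthz.go - sketch of the apiserver healthz probe shown above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Assumption for the sketch only: skip TLS verification instead of
    		// loading the cluster CA as the real client does.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.39.234:8443/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }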
	I0923 10:53:25.086810   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:53:25.257145   24995 request.go:632] Waited for 170.272303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257205   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257212   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.257236   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.257246   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.262177   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.267069   24995 system_pods.go:59] 17 kube-system pods found
	I0923 10:53:25.267104   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.267110   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.267114   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.267119   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.267122   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.267125   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.267129   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.267132   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.267135   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.267139   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.267147   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.267153   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.267156   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.267159   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.267162   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.267165   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.267168   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.267174   24995 system_pods.go:74] duration metric: took 180.359181ms to wait for pod list to return data ...
	I0923 10:53:25.267183   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:53:25.457458   24995 request.go:632] Waited for 190.183499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457513   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457518   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.457524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.457529   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.461448   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.461660   24995 default_sa.go:45] found service account: "default"
	I0923 10:53:25.461673   24995 default_sa.go:55] duration metric: took 194.484894ms for default service account to be created ...
	I0923 10:53:25.461682   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:53:25.657106   24995 request.go:632] Waited for 195.349388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657170   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657177   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.657185   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.657189   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.661432   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.665847   24995 system_pods.go:86] 17 kube-system pods found
	I0923 10:53:25.665873   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.665880   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.665884   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.665888   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.665891   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.665895   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.665898   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.665902   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.665905   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.665909   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.665912   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.665915   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.665918   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.665922   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.665925   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.665928   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.665930   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.665936   24995 system_pods.go:126] duration metric: took 204.248587ms to wait for k8s-apps to be running ...
	I0923 10:53:25.665944   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:53:25.665984   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:25.684789   24995 system_svc.go:56] duration metric: took 18.833844ms WaitForService to wait for kubelet
	I0923 10:53:25.684821   24995 kubeadm.go:582] duration metric: took 23.662507551s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:53:25.684838   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:53:25.857256   24995 request.go:632] Waited for 172.290601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857312   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857319   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.857330   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.857337   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.861630   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.862368   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862410   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862427   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862432   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862438   24995 node_conditions.go:105] duration metric: took 177.594557ms to run NodePressure ...
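Note: the NodePressure step lists the nodes and reads their capacities (2 CPUs and 17734596Ki of ephemeral storage per node above). As a hedged illustration of the same query with client-go, not the minikube implementation; the kubeconfig path is again illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		// Corresponds to the "node cpu capacity" / "node storage ephemeral capacity" lines above.
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }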
	I0923 10:53:25.862459   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:53:25.862493   24995 start.go:255] writing updated cluster config ...
	I0923 10:53:25.865563   24995 out.go:201] 
	I0923 10:53:25.867057   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:25.867172   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.868777   24995 out.go:177] * Starting "ha-790780-m03" control-plane node in "ha-790780" cluster
	I0923 10:53:25.870020   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:53:25.870049   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:53:25.870173   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:53:25.870184   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:53:25.870283   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.870479   24995 start.go:360] acquireMachinesLock for ha-790780-m03: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:53:25.870521   24995 start.go:364] duration metric: took 24.387µs to acquireMachinesLock for "ha-790780-m03"
	I0923 10:53:25.870535   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:25.870632   24995 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 10:53:25.871978   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:53:25.872058   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:25.872097   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:25.887083   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0923 10:53:25.887502   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:25.887952   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:25.887969   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:25.888292   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:25.888496   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:25.888647   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:25.888772   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:53:25.888800   24995 client.go:168] LocalClient.Create starting
	I0923 10:53:25.888829   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:53:25.888863   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888888   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888936   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:53:25.888954   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888964   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888978   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:53:25.888986   24995 main.go:141] libmachine: (ha-790780-m03) Calling .PreCreateCheck
	I0923 10:53:25.889134   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:25.889504   24995 main.go:141] libmachine: Creating machine...
	I0923 10:53:25.889516   24995 main.go:141] libmachine: (ha-790780-m03) Calling .Create
	I0923 10:53:25.889669   24995 main.go:141] libmachine: (ha-790780-m03) Creating KVM machine...
	I0923 10:53:25.890855   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing default KVM network
	I0923 10:53:25.890969   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing private KVM network mk-ha-790780
	I0923 10:53:25.891095   24995 main.go:141] libmachine: (ha-790780-m03) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:25.891119   24995 main.go:141] libmachine: (ha-790780-m03) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:53:25.891198   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:25.891096   25778 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:25.891276   24995 main.go:141] libmachine: (ha-790780-m03) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:53:26.119663   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.119526   25778 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa...
	I0923 10:53:26.169862   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169746   25778 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk...
	I0923 10:53:26.169897   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing magic tar header
	I0923 10:53:26.169907   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing SSH key tar header
	I0923 10:53:26.169915   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169856   25778 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:26.169932   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03
	I0923 10:53:26.169988   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 (perms=drwx------)
	I0923 10:53:26.170004   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:53:26.170016   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:53:26.170030   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:53:26.170039   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:53:26.170046   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:53:26.170054   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:53:26.170064   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:26.170078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:26.170094   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:53:26.170131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:53:26.170142   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:53:26.170148   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home
	I0923 10:53:26.170153   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Skipping /home - not owner
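Note: the "Fixing permissions" block above walks from the machine directory upward toward /home, adding the owner-execute bit on each directory it can modify and skipping directories the user does not own. A rough sketch of that idea in plain Go; the ownership handling and mode bits are simplified assumptions, not minikube's common.go:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // addExecuteUpwards walks from dir up to (and including) stop and adds the
    // owner-execute bit where it is missing, logging directories it cannot
    // change (compare "Skipping /home - not owner" above).
    func addExecuteUpwards(dir, stop string) error {
    	for {
    		info, err := os.Stat(dir)
    		if err != nil {
    			return err
    		}
    		mode := info.Mode()
    		if mode&0o100 == 0 {
    			if err := os.Chmod(dir, mode|0o100); err != nil {
    				fmt.Printf("skipping %s: %v\n", dir, err)
    			} else {
    				fmt.Printf("set executable bit on %s\n", dir)
    			}
    		}
    		if dir == stop {
    			return nil
    		}
    		parent := filepath.Dir(dir)
    		if parent == dir {
    			return nil
    		}
    		dir = parent
    	}
    }

    func main() {
    	// Paths copied from the log purely for illustration.
    	_ = addExecuteUpwards("/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03", "/home")
    }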
	I0923 10:53:26.171065   24995 main.go:141] libmachine: (ha-790780-m03) define libvirt domain using xml: 
	I0923 10:53:26.171093   24995 main.go:141] libmachine: (ha-790780-m03) <domain type='kvm'>
	I0923 10:53:26.171101   24995 main.go:141] libmachine: (ha-790780-m03)   <name>ha-790780-m03</name>
	I0923 10:53:26.171112   24995 main.go:141] libmachine: (ha-790780-m03)   <memory unit='MiB'>2200</memory>
	I0923 10:53:26.171120   24995 main.go:141] libmachine: (ha-790780-m03)   <vcpu>2</vcpu>
	I0923 10:53:26.171126   24995 main.go:141] libmachine: (ha-790780-m03)   <features>
	I0923 10:53:26.171134   24995 main.go:141] libmachine: (ha-790780-m03)     <acpi/>
	I0923 10:53:26.171144   24995 main.go:141] libmachine: (ha-790780-m03)     <apic/>
	I0923 10:53:26.171152   24995 main.go:141] libmachine: (ha-790780-m03)     <pae/>
	I0923 10:53:26.171161   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171166   24995 main.go:141] libmachine: (ha-790780-m03)   </features>
	I0923 10:53:26.171171   24995 main.go:141] libmachine: (ha-790780-m03)   <cpu mode='host-passthrough'>
	I0923 10:53:26.171175   24995 main.go:141] libmachine: (ha-790780-m03)   
	I0923 10:53:26.171184   24995 main.go:141] libmachine: (ha-790780-m03)   </cpu>
	I0923 10:53:26.171200   24995 main.go:141] libmachine: (ha-790780-m03)   <os>
	I0923 10:53:26.171209   24995 main.go:141] libmachine: (ha-790780-m03)     <type>hvm</type>
	I0923 10:53:26.171218   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='cdrom'/>
	I0923 10:53:26.171235   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='hd'/>
	I0923 10:53:26.171247   24995 main.go:141] libmachine: (ha-790780-m03)     <bootmenu enable='no'/>
	I0923 10:53:26.171256   24995 main.go:141] libmachine: (ha-790780-m03)   </os>
	I0923 10:53:26.171264   24995 main.go:141] libmachine: (ha-790780-m03)   <devices>
	I0923 10:53:26.171272   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='cdrom'>
	I0923 10:53:26.171284   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/boot2docker.iso'/>
	I0923 10:53:26.171294   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hdc' bus='scsi'/>
	I0923 10:53:26.171302   24995 main.go:141] libmachine: (ha-790780-m03)       <readonly/>
	I0923 10:53:26.171311   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171321   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='disk'>
	I0923 10:53:26.171336   24995 main.go:141] libmachine: (ha-790780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:53:26.171351   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk'/>
	I0923 10:53:26.171361   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hda' bus='virtio'/>
	I0923 10:53:26.171367   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171378   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171390   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='mk-ha-790780'/>
	I0923 10:53:26.171401   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171412   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171422   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171430   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='default'/>
	I0923 10:53:26.171439   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171447   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171455   24995 main.go:141] libmachine: (ha-790780-m03)     <serial type='pty'>
	I0923 10:53:26.171462   24995 main.go:141] libmachine: (ha-790780-m03)       <target port='0'/>
	I0923 10:53:26.171471   24995 main.go:141] libmachine: (ha-790780-m03)     </serial>
	I0923 10:53:26.171479   24995 main.go:141] libmachine: (ha-790780-m03)     <console type='pty'>
	I0923 10:53:26.171490   24995 main.go:141] libmachine: (ha-790780-m03)       <target type='serial' port='0'/>
	I0923 10:53:26.171499   24995 main.go:141] libmachine: (ha-790780-m03)     </console>
	I0923 10:53:26.171508   24995 main.go:141] libmachine: (ha-790780-m03)     <rng model='virtio'>
	I0923 10:53:26.171518   24995 main.go:141] libmachine: (ha-790780-m03)       <backend model='random'>/dev/random</backend>
	I0923 10:53:26.171530   24995 main.go:141] libmachine: (ha-790780-m03)     </rng>
	I0923 10:53:26.171537   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171544   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171555   24995 main.go:141] libmachine: (ha-790780-m03)   </devices>
	I0923 10:53:26.171565   24995 main.go:141] libmachine: (ha-790780-m03) </domain>
	I0923 10:53:26.171575   24995 main.go:141] libmachine: (ha-790780-m03) 
	I0923 10:53:26.178380   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:72:76:7a in network default
	I0923 10:53:26.178970   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring networks are active...
	I0923 10:53:26.178994   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:26.179728   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network default is active
	I0923 10:53:26.180047   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network mk-ha-790780 is active
	I0923 10:53:26.180480   24995 main.go:141] libmachine: (ha-790780-m03) Getting domain xml...
	I0923 10:53:26.181303   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:27.415592   24995 main.go:141] libmachine: (ha-790780-m03) Waiting to get IP...
	I0923 10:53:27.416244   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.416680   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.416705   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.416654   25778 retry.go:31] will retry after 301.241192ms: waiting for machine to come up
	I0923 10:53:27.719304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.719799   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.719822   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.719765   25778 retry.go:31] will retry after 352.048049ms: waiting for machine to come up
	I0923 10:53:28.073266   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.073729   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.073755   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.073678   25778 retry.go:31] will retry after 446.737236ms: waiting for machine to come up
	I0923 10:53:28.522311   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.522758   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.522785   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.522723   25778 retry.go:31] will retry after 430.883485ms: waiting for machine to come up
	I0923 10:53:28.955161   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.955610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.955632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.955571   25778 retry.go:31] will retry after 596.158416ms: waiting for machine to come up
	I0923 10:53:29.553342   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:29.553790   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:29.553817   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:29.553738   25778 retry.go:31] will retry after 730.070516ms: waiting for machine to come up
	I0923 10:53:30.285659   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:30.286131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:30.286157   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:30.286040   25778 retry.go:31] will retry after 880.584916ms: waiting for machine to come up
	I0923 10:53:31.168589   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:31.169030   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:31.169056   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:31.168976   25778 retry.go:31] will retry after 1.090798092s: waiting for machine to come up
	I0923 10:53:32.261334   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:32.261824   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:32.261851   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:32.261785   25778 retry.go:31] will retry after 1.772470281s: waiting for machine to come up
	I0923 10:53:34.036802   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:34.037280   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:34.037304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:34.037244   25778 retry.go:31] will retry after 2.114432637s: waiting for machine to come up
	I0923 10:53:36.153777   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:36.154260   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:36.154287   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:36.154219   25778 retry.go:31] will retry after 2.408325817s: waiting for machine to come up
	I0923 10:53:38.564571   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:38.565093   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:38.565130   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:38.565046   25778 retry.go:31] will retry after 2.326260729s: waiting for machine to come up
	I0923 10:53:40.892782   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:40.893136   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:40.893165   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:40.893117   25778 retry.go:31] will retry after 4.498444105s: waiting for machine to come up
	I0923 10:53:45.396707   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:45.397269   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:45.397291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:45.397229   25778 retry.go:31] will retry after 3.781853522s: waiting for machine to come up
	I0923 10:53:49.183061   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183495   24995 main.go:141] libmachine: (ha-790780-m03) Found IP for machine: 192.168.39.128
	I0923 10:53:49.183516   24995 main.go:141] libmachine: (ha-790780-m03) Reserving static IP address...
	I0923 10:53:49.183525   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has current primary IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183927   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find host DHCP lease matching {name: "ha-790780-m03", mac: "52:54:00:da:88:d2", ip: "192.168.39.128"} in network mk-ha-790780
	I0923 10:53:49.254082   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Getting to WaitForSSH function...
	I0923 10:53:49.254113   24995 main.go:141] libmachine: (ha-790780-m03) Reserved static IP address: 192.168.39.128
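Note: the sequence above polls the libvirt DHCP leases for the new domain's MAC address, retrying with a growing delay until an address appears (about 20 s here before 192.168.39.128 is found). A minimal sketch of that retry shape in Go; lookupIP is a hypothetical stand-in, not a minikube or libvirt API, and the backoff/jitter details are assumptions:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying DHCP leases for the domain's MAC
    // (52:54:00:da:88:d2 in the log); it is NOT a real minikube function.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		// Grow the delay with a little jitter, as the increasing
    		// "will retry after ..." intervals above suggest.
    		delay += time.Duration(rand.Int63n(int64(delay)))
    	}
    	return "", fmt.Errorf("timed out waiting for IP for %s", mac)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:da:88:d2", 2*time.Second)
    	fmt.Println(ip, err)
    }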
	I0923 10:53:49.254149   24995 main.go:141] libmachine: (ha-790780-m03) Waiting for SSH to be available...
	I0923 10:53:49.256671   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257072   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.257129   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257268   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH client type: external
	I0923 10:53:49.257291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa (-rw-------)
	I0923 10:53:49.257308   24995 main.go:141] libmachine: (ha-790780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:53:49.257317   24995 main.go:141] libmachine: (ha-790780-m03) DBG | About to run SSH command:
	I0923 10:53:49.257331   24995 main.go:141] libmachine: (ha-790780-m03) DBG | exit 0
	I0923 10:53:49.381472   24995 main.go:141] libmachine: (ha-790780-m03) DBG | SSH cmd err, output: <nil>: 
	I0923 10:53:49.381777   24995 main.go:141] libmachine: (ha-790780-m03) KVM machine creation complete!
	I0923 10:53:49.382107   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:49.382695   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.382878   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.383011   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:53:49.383024   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetState
	I0923 10:53:49.384376   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:53:49.384391   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:53:49.384397   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:53:49.384405   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.386759   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387147   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.387171   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387306   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.387467   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387589   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387701   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.387847   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.388073   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.388086   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:53:49.488864   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
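Note: the two "About to run SSH command: exit 0" exchanges above are the readiness probe; once "exit 0" succeeds over SSH, the guest is considered reachable. A hedged sketch of the same probe with golang.org/x/crypto/ssh, with the address and key path copied from the log purely for illustration (this is not minikube's sshutil code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func probeSSH(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") // the machine counts as "up" once this returns nil
    }

    func main() {
    	err := probeSSH("192.168.39.128:22", "docker",
    		"/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa")
    	fmt.Println("ssh probe:", err)
    }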
	I0923 10:53:49.488884   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:53:49.488892   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.491596   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.491978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.492008   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.492099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.492277   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492427   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492526   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.492704   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.492876   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.492888   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:53:49.598720   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:53:49.598811   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:53:49.599353   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:53:49.599372   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599616   24995 buildroot.go:166] provisioning hostname "ha-790780-m03"
	I0923 10:53:49.599639   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599803   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.602122   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602493   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.602532   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602649   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.602826   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.602949   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.603164   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.603352   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.603516   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.603528   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m03 && echo "ha-790780-m03" | sudo tee /etc/hostname
	I0923 10:53:49.721012   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m03
	
	I0923 10:53:49.721052   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.723652   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.723993   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.724019   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.724168   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.724322   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724468   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724607   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.724760   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.724931   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.724946   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:53:49.840094   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:53:49.840118   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:53:49.840133   24995 buildroot.go:174] setting up certificates
	I0923 10:53:49.840143   24995 provision.go:84] configureAuth start
	I0923 10:53:49.840153   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.840425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:49.842798   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843203   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.843398   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.846675   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.846978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.847001   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.847165   24995 provision.go:143] copyHostCerts
	I0923 10:53:49.847199   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847229   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:53:49.847237   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847304   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:53:49.847373   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847390   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:53:49.847395   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847418   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:53:49.847462   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847478   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:53:49.847484   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847505   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:53:49.847551   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m03 san=[127.0.0.1 192.168.39.128 ha-790780-m03 localhost minikube]
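Note: the provisioning step above issues a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.39.128 ha-790780-m03 localhost minikube]. A rough, self-contained sketch of issuing a certificate with that SAN shape using crypto/x509; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it only illustrates the idea, not minikube's provisioner:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; in the real flow this comes from certs/ca.pem + ca-key.pem.
    	// Errors are elided for brevity in this sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs listed in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-790780-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-790780-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.128")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }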
	I0923 10:53:50.272155   24995 provision.go:177] copyRemoteCerts
	I0923 10:53:50.272213   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:53:50.272235   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.275051   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275585   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.275610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275867   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.276099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.276265   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.276390   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.359884   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:53:50.359964   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:53:50.385147   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:53:50.385241   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:53:50.408651   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:53:50.408716   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:53:50.435874   24995 provision.go:87] duration metric: took 595.718111ms to configureAuth
	I0923 10:53:50.435900   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:53:50.436094   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:50.436172   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.438683   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439106   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.439127   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439321   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.439488   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439634   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439746   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.439894   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.440051   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.440064   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:53:50.684672   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:53:50.684697   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:53:50.684703   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetURL
	I0923 10:53:50.686020   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using libvirt version 6000000
	I0923 10:53:50.688488   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.688853   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.688879   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.689108   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:53:50.689121   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:53:50.689127   24995 client.go:171] duration metric: took 24.800318648s to LocalClient.Create
	I0923 10:53:50.689151   24995 start.go:167] duration metric: took 24.800381017s to libmachine.API.Create "ha-790780"
	I0923 10:53:50.689159   24995 start.go:293] postStartSetup for "ha-790780-m03" (driver="kvm2")
	I0923 10:53:50.689169   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:53:50.689184   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.689440   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:53:50.689461   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.691514   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.691815   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.691839   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.692003   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.692169   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.692285   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.692465   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.777980   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:53:50.782722   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:53:50.782745   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:53:50.782841   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:53:50.782921   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:53:50.782934   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:53:50.783049   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:53:50.794032   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:50.818235   24995 start.go:296] duration metric: took 129.060416ms for postStartSetup
	I0923 10:53:50.818300   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:50.818861   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.821701   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.822100   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822411   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:50.822611   24995 start.go:128] duration metric: took 24.951969783s to createHost
	I0923 10:53:50.822632   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.824818   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825087   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.825104   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825227   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.825431   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825587   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825708   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.825886   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.826038   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.826050   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:53:50.930070   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088830.907721483
	
	I0923 10:53:50.930099   24995 fix.go:216] guest clock: 1727088830.907721483
	I0923 10:53:50.930110   24995 fix.go:229] Guest: 2024-09-23 10:53:50.907721483 +0000 UTC Remote: 2024-09-23 10:53:50.822622208 +0000 UTC m=+146.966414831 (delta=85.099275ms)
	I0923 10:53:50.930129   24995 fix.go:200] guest clock delta is within tolerance: 85.099275ms
	I0923 10:53:50.930136   24995 start.go:83] releasing machines lock for "ha-790780-m03", held for 25.059606586s
	I0923 10:53:50.930159   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.930413   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.933262   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.933632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.933662   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.936077   24995 out.go:177] * Found network options:
	I0923 10:53:50.937456   24995 out.go:177]   - NO_PROXY=192.168.39.234,192.168.39.43
	W0923 10:53:50.938766   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.938786   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.938798   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939303   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939487   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939579   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:53:50.939619   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	W0923 10:53:50.939635   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.939651   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.939713   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:53:50.939736   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.942522   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942765   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942929   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.942950   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943114   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943237   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.943278   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943281   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943465   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943491   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.943650   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943653   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.944011   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.944170   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:51.179564   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:53:51.186418   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:53:51.186493   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:53:51.205433   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:53:51.205455   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:53:51.205519   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:53:51.225654   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:53:51.240061   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:53:51.240122   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:53:51.255040   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:53:51.270087   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:53:51.386340   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:53:51.551856   24995 docker.go:233] disabling docker service ...
	I0923 10:53:51.551936   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:53:51.566431   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:53:51.579646   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:53:51.704084   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:53:51.818925   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:53:51.833174   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:53:51.851230   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:53:51.851304   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.862780   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:53:51.862838   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.874053   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.884749   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.895370   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:53:51.906992   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.919902   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.938806   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
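Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a reconstruction from the commands in this log, not a dump of the file; the drop-in may contain other keys that never appear here.

    # Illustrative check on the node (expected values reconstructed from the sed commands above):
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",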
	I0923 10:53:51.950285   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:53:51.960703   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:53:51.960774   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:53:51.975701   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
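Note: the three commands above probe the bridge netfilter sysctl, load br_netfilter when it is missing, and enable IPv4 forwarding for the current boot only. A minimal sketch to make the same settings persistent across reboots (assumes the standard systemd modules-load.d / sysctl.d locations; the test itself does not do this):

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
        | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    sudo sysctl --system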
	I0923 10:53:51.986268   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:52.107292   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:53:52.198777   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:53:52.198848   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:53:52.204135   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:53:52.204184   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:53:52.208403   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:53:52.251505   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:53:52.251599   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.282350   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.311799   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:53:52.313353   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:53:52.314907   24995 out.go:177]   - env NO_PROXY=192.168.39.234,192.168.39.43
	I0923 10:53:52.316435   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:52.319158   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319626   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:52.319654   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319874   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:53:52.324605   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:52.339255   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:53:52.339529   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:52.339777   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.339813   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.354195   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0923 10:53:52.354688   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.355182   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.355203   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.355538   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.355708   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:53:52.357205   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:52.357505   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.357542   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.372762   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0923 10:53:52.373235   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.373697   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.373716   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.374015   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.374212   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:52.374340   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.128
	I0923 10:53:52.374351   24995 certs.go:194] generating shared ca certs ...
	I0923 10:53:52.374369   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.374504   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:53:52.374556   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:53:52.374570   24995 certs.go:256] generating profile certs ...
	I0923 10:53:52.374655   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:53:52.374693   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6
	I0923 10:53:52.374713   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.128 192.168.39.254]
	I0923 10:53:52.830596   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 ...
	I0923 10:53:52.830630   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6: {Name:mk3da13c3de64b9df293631e361b2c7f1e18faef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830809   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 ...
	I0923 10:53:52.830824   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6: {Name:mk9b5e211aee3a00b4a3121b2b594883d08d2d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830919   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:53:52.831074   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
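Note: the regenerated apiserver certificate has to cover every address a client might dial, which is why the IP list above includes the in-cluster service IPs, loopback, the two existing control-plane nodes, the new node 192.168.39.128 and the HA VIP 192.168.39.254. An illustrative way to confirm the SANs on the freshly written file (run on the build host):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt \
        | grep -A1 'Subject Alternative Name'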
	I0923 10:53:52.831254   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:53:52.831273   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:53:52.831292   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:53:52.831307   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:53:52.831326   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:53:52.831343   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:53:52.831361   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:53:52.831377   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:53:52.845466   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:53:52.845553   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:53:52.845615   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:53:52.845628   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:53:52.845681   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:53:52.845720   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:53:52.845752   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:53:52.845808   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:52.845849   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:53:52.845870   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:52.845888   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:53:52.845975   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:52.849292   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849803   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:52.849833   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849989   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:52.850212   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:52.850363   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:52.850493   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:52.925695   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:53:52.931543   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:53:52.942513   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:53:52.947104   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:53:52.958388   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:53:52.963161   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:53:52.974344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:53:52.978586   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:53:52.989199   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:53:52.993359   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:53:53.004532   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:53:53.009112   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:53:53.022998   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:53:53.048580   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:53:53.074022   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:53:53.099377   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:53:53.125775   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 10:53:53.149277   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:53:53.173416   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:53:53.196002   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:53:53.219585   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:53:53.244005   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:53:53.269483   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:53:53.294869   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:53:53.313037   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:53:53.331540   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:53:53.349167   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:53:53.365721   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:53:53.382590   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:53:53.399048   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:53:53.415691   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:53:53.421883   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:53:53.432913   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437536   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437594   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.443568   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:53:53.454559   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:53:53.466110   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.471977   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.472046   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.478758   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:53:53.490184   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:53:53.500924   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505855   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505903   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.511671   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:53:53.523484   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:53:53.527585   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:53:53.527642   24995 kubeadm.go:934] updating node {m03 192.168.39.128 8443 v1.31.1 crio true true} ...
	I0923 10:53:53.527721   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:53:53.527745   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:53:53.527775   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:53:53.547465   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:53:53.547540   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
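Note: this generated manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml further down in the log, so the kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 and elects a leader via the plndr-cp-lock lease. Illustrative checks on a control-plane node (assumes SSH access; only the current leader carries the VIP):

    sudo crictl ps --name kube-vip
    ip addr show eth0 | grep 192.168.39.254
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get lease plndr-cp-lock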
	I0923 10:53:53.547608   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.560380   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:53:53.560453   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.573111   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:53:53.573138   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:53:53.573159   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573166   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:53.573188   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:53:53.573217   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.573226   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573267   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.590633   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:53:53.590666   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.590676   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:53:53.590699   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:53:53.590727   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:53:53.590760   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.604722   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:53:53.604761   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
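Note: the node has no Kubernetes binaries yet, so kubeadm, kubectl and kubelet are streamed from the local cache; the binary.go lines above also show the dl.k8s.io URLs and their .sha256 checksum files. A rough, checksum-verified manual equivalent of that transfer (illustrative; the test itself copies from the jenkins cache over SSH):

    VER=v1.31.1
    sudo mkdir -p "/var/lib/minikube/binaries/${VER}"
    for BIN in kubeadm kubectl kubelet; do
      curl -fsSL -o "${BIN}" "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}"
      curl -fsSL -o "${BIN}.sha256" "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}.sha256"
      echo "$(cat "${BIN}.sha256")  ${BIN}" | sha256sum --check -
      sudo install -m 0755 "${BIN}" "/var/lib/minikube/binaries/${VER}/${BIN}"
    done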
	I0923 10:53:54.451748   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:53:54.462513   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:53:54.481654   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:53:54.498291   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:53:54.514964   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:53:54.519190   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:54.531635   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:54.654563   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:54.675941   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:54.676279   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:54.676323   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:54.693004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0923 10:53:54.693496   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:54.693939   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:54.693961   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:54.694293   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:54.694479   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:54.694626   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:53:54.694743   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:53:54.694765   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:54.697460   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.697884   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:54.697912   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.698049   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:54.698201   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:54.698349   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:54.698455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:54.854997   24995 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:54.855050   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
	I0923 10:54:17.634590   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443": (22.77951683s)
	I0923 10:54:17.634630   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:54:18.244633   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m03 minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:54:18.356200   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:54:18.464003   24995 start.go:319] duration metric: took 23.769370572s to joinCluster
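Note: at this point the third control-plane node has joined, been labeled, and had the control-plane NoSchedule taint removed. Illustrative sanity checks against the cluster (kubeadm applies the standard component labels to its static pods):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -l component=etcd -o wide            # expect one etcd member per control-plane node
    kubectl -n kube-system get pods -l component=kube-apiserver -o wide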
	I0923 10:54:18.464065   24995 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:54:18.464405   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:54:18.465913   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:54:18.467412   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:54:18.756406   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:54:18.802392   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:54:18.802611   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:54:18.802663   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
	I0923 10:54:18.802852   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m03" to be "Ready" ...
	I0923 10:54:18.802919   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:18.802926   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:18.802933   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:18.802938   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:18.806473   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
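Note: the repeated GET requests that follow are node_ready.go polling the node object roughly every 500ms until its Ready condition turns True, capped at 6m. The same wait expressed with kubectl (illustrative; node name and timeout taken from this log):

    kubectl wait --for=condition=Ready node/ha-790780-m03 --timeout=6m

    # or, closer to the explicit poll loop seen here:
    until [ "$(kubectl get node ha-790780-m03 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
      sleep 0.5
    done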
	I0923 10:54:19.303251   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.303278   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.303289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.303297   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.306929   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:19.803053   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.803079   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.803087   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.803099   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.806552   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.303861   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.303887   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.303897   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.303903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.307405   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.803113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.803146   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.803154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.803159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.806146   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:20.806645   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:21.303931   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.303977   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.303989   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.303995   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.308047   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:21.803958   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.803978   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.803985   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.803991   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.807634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.303112   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.303136   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.303146   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.303152   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.803868   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.803900   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.803912   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.803918   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.809179   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:22.809796   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:23.303023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.303042   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.303050   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.303054   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.306668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:23.803788   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.803812   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.803824   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.803830   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.807293   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.303271   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.303300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.303312   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.303319   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.306672   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.804050   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.804069   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.804078   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.804081   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.807683   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:25.303840   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.303859   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.303867   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.303871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.306860   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:25.307495   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:25.803972   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.804004   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.804015   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.804020   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.809010   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:26.303324   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.303361   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.303373   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.303381   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.307038   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:26.803707   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.803726   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.803735   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.803740   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.807424   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.303612   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.303633   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.303641   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.303644   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.307894   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:27.803014   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.803035   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.803042   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.803047   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.806595   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.303068   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.303091   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.303099   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.303103   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.306712   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.803340   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.803367   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.803378   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.803383   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.808838   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:29.303295   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.303316   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.303329   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.303334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.306632   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.803768   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.803791   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.803799   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.803805   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.807177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.807790   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:30.303713   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.303735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.303747   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.303752   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.307209   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:30.803111   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.803133   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.803141   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.803149   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.806613   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.303325   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.303352   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.303371   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.303378   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.307177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.803015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.803038   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.803048   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.803056   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.806715   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.304018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.304043   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.304053   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.304060   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.307932   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.308669   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:32.803891   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.803917   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.803926   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.803930   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.807307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.303944   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.303964   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.303971   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.303975   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.307665   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.803624   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.803651   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.803662   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.803667   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.807257   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.303218   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.303254   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.303260   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.306866   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.803306   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.803327   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.803334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.803339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.807098   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.807707   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:35.303220   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.303255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.303261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.306357   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:35.803279   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.803300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.803308   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.803311   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.806322   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:36.303406   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.303426   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.303434   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.303437   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.307051   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.804001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.804025   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.804032   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.804037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.807873   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.808340   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:37.304023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.304056   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.304068   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.304074   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.307139   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.803018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.803040   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.803049   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.803053   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.806605   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.807211   24995 node_ready.go:49] node "ha-790780-m03" has status "Ready":"True"
	I0923 10:54:37.807228   24995 node_ready.go:38] duration metric: took 19.004361031s for node "ha-790780-m03" to be "Ready" ...
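The loop traced above issues one GET of the node object roughly every 500 ms and checks its Ready condition until it reports True (here after about 19 s). Below is a minimal client-go sketch of that pattern, assuming an already-built clientset; the package name, helper name, interval, and timeout are illustrative and are not minikube's actual code.

package clusterchecks

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout expires, mirroring the 500 ms cadence in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}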
	I0923 10:54:37.807235   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:54:37.807290   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:37.807299   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.807306   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.807314   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.813087   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:37.819930   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.820001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:54:37.820010   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.820017   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.820021   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.822941   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.823534   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.823553   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.823564   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.823569   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.826001   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.826517   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.826537   24995 pod_ready.go:82] duration metric: took 6.583104ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826548   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826607   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:54:37.826617   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.826627   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.826638   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.829279   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.829843   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.829861   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.829871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.829876   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.832424   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.832919   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.832933   24995 pod_ready.go:82] duration metric: took 6.374276ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832941   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832999   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:54:37.833006   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.833012   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.833019   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.835776   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.836388   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.836406   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.836415   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.836421   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.838742   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.839384   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.839400   24995 pod_ready.go:82] duration metric: took 6.450727ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839411   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839464   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:54:37.839474   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.839484   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.839492   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.841917   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.842434   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:37.842448   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.842457   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.842463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.844487   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.844973   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.844988   24995 pod_ready.go:82] duration metric: took 5.569102ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.844998   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.003469   24995 request.go:632] Waited for 158.377606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003538   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003546   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.003556   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.003563   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.007272   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.203213   24995 request.go:632] Waited for 195.30349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203263   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203268   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.203276   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.203283   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.206660   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.207358   24995 pod_ready.go:93] pod "etcd-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.207377   24995 pod_ready.go:82] duration metric: took 362.371698ms for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.207393   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.403519   24995 request.go:632] Waited for 196.060085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403591   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403596   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.403604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.403609   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.407248   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.603071   24995 request.go:632] Waited for 195.28673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603162   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603171   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.603185   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.603191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.606368   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.606871   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.606889   24995 pod_ready.go:82] duration metric: took 399.489169ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.606901   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.803863   24995 request.go:632] Waited for 196.897276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803951   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803957   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.803965   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.803970   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.807324   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.003391   24995 request.go:632] Waited for 195.083674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003447   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003452   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.003459   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.003463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.007170   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.007621   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.007637   24995 pod_ready.go:82] duration metric: took 400.728218ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.007646   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.203104   24995 request.go:632] Waited for 195.376867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203174   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203180   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.203191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.203199   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.207195   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.403428   24995 request.go:632] Waited for 195.367448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403481   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403497   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.403514   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.403518   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.407467   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.408031   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.408055   24995 pod_ready.go:82] duration metric: took 400.401034ms for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.408068   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.604073   24995 request.go:632] Waited for 195.932476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604147   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604155   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.604162   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.604171   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.607668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.803638   24995 request.go:632] Waited for 195.213228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803724   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.803743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.803746   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.807615   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.808349   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.808366   24995 pod_ready.go:82] duration metric: took 400.287089ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.808375   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.003824   24995 request.go:632] Waited for 195.387565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003877   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003882   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.003889   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.003899   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.007398   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.203651   24995 request.go:632] Waited for 195.36679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203725   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.203732   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.203735   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.207328   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.208124   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.208142   24995 pod_ready.go:82] duration metric: took 399.761139ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.208155   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.403086   24995 request.go:632] Waited for 194.869554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403150   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403167   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.403177   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.403187   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.407112   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.603302   24995 request.go:632] Waited for 195.339611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603351   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603356   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.603364   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.603368   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.606880   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.607541   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.607563   24995 pod_ready.go:82] duration metric: took 399.39886ms for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.607574   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.803473   24995 request.go:632] Waited for 195.828576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803528   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803533   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.803540   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.803544   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.807602   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:41.003253   24995 request.go:632] Waited for 194.249655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003339   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003350   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.003359   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.003365   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.006586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.007310   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.007329   24995 pod_ready.go:82] duration metric: took 399.74892ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.007339   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.203496   24995 request.go:632] Waited for 196.092833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203562   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203567   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.203575   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.203578   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.207204   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.403851   24995 request.go:632] Waited for 195.767978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403907   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403914   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.403924   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.403934   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.407303   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.407822   24995 pod_ready.go:93] pod "kube-proxy-rqjzc" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.407837   24995 pod_ready.go:82] duration metric: took 400.492538ms for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.407846   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.604077   24995 request.go:632] Waited for 196.149981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604148   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.604169   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.604174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.607470   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.803470   24995 request.go:632] Waited for 195.363139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803568   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803577   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.803599   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.803607   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.806928   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.807802   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.807821   24995 pod_ready.go:82] duration metric: took 399.96783ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.807833   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.004033   24995 request.go:632] Waited for 196.111135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004102   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004132   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.004143   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.004163   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.007471   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.203462   24995 request.go:632] Waited for 195.3653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203523   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203530   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.203539   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.203542   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.207322   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.207956   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.207977   24995 pod_ready.go:82] duration metric: took 400.13764ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.207986   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.403868   24995 request.go:632] Waited for 195.812102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403956   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403968   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.403980   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.403990   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.407964   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.603132   24995 request.go:632] Waited for 194.291839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603204   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603209   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.603219   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.603225   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.607412   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:42.607957   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.607976   24995 pod_ready.go:82] duration metric: took 399.981007ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.607988   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.804082   24995 request.go:632] Waited for 196.014482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804143   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.804150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.804155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.807740   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.003755   24995 request.go:632] Waited for 195.347939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003855   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003875   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.003887   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.003896   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.007973   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.009036   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:43.009058   24995 pod_ready.go:82] duration metric: took 401.061758ms for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:43.009074   24995 pod_ready.go:39] duration metric: took 5.201827787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
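The recurring "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter (the QPS/Burst fields on the REST config), not from server-side API priority and fairness. A minimal sketch of raising those limits when building a clientset follows, assuming a kubeconfig path; the numbers are arbitrary examples, not recommendations.

package clusterchecks

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset builds a clientset with a larger client-side rate limit so
// bursts of polling GETs like those above are not queued locally.
func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go's default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}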
	I0923 10:54:43.009091   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:54:43.009170   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:54:43.027664   24995 api_server.go:72] duration metric: took 24.563557521s to wait for apiserver process to appear ...
	I0923 10:54:43.027697   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:54:43.027721   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:54:43.032140   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:54:43.032214   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:54:43.032220   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.032231   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.032238   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.033668   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:54:43.033783   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:54:43.033805   24995 api_server.go:131] duration metric: took 6.10028ms to wait for apiserver health ...
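The two probes above (GET /healthz expecting the literal body "ok", then /version to read the control-plane version) can be issued through the same clientset's discovery client. A minimal sketch, again assuming a configured clientset; the function name is illustrative.

package clusterchecks

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer hits /healthz and then reads the server version,
// mirroring the api_server.go checks in the log above.
func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", string(body))
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.31.1
	return nil
}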
	I0923 10:54:43.033815   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:54:43.204056   24995 request.go:632] Waited for 170.168573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204125   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204130   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.204140   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.204147   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.210512   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.216975   24995 system_pods.go:59] 24 kube-system pods found
	I0923 10:54:43.217008   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.217015   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.217020   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.217025   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.217030   24995 system_pods.go:61] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.217035   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.217039   24995 system_pods.go:61] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.217046   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.217052   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.217060   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.217065   24995 system_pods.go:61] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.217073   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.217078   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.217086   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.217094   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.217099   24995 system_pods.go:61] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.217104   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.217109   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.217113   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.217118   24995 system_pods.go:61] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.217124   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.217129   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.217137   24995 system_pods.go:61] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.217141   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.217150   24995 system_pods.go:74] duration metric: took 183.325652ms to wait for pod list to return data ...
	I0923 10:54:43.217162   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:54:43.403603   24995 request.go:632] Waited for 186.357604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403650   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403671   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.403685   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.403692   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.408142   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.408270   24995 default_sa.go:45] found service account: "default"
	I0923 10:54:43.408289   24995 default_sa.go:55] duration metric: took 191.114244ms for default service account to be created ...
	I0923 10:54:43.408302   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:54:43.603624   24995 request.go:632] Waited for 195.240427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603680   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603685   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.603692   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.603698   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.609933   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.617043   24995 system_pods.go:86] 24 kube-system pods found
	I0923 10:54:43.617075   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.617081   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.617085   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.617089   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.617094   24995 system_pods.go:89] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.617098   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.617101   24995 system_pods.go:89] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.617105   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.617108   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.617111   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.617115   24995 system_pods.go:89] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.617118   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.617123   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.617126   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.617129   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.617132   24995 system_pods.go:89] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.617136   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.617139   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.617142   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.617145   24995 system_pods.go:89] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.617148   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.617151   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.617154   24995 system_pods.go:89] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.617157   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.617163   24995 system_pods.go:126] duration metric: took 208.855184ms to wait for k8s-apps to be running ...
	I0923 10:54:43.617173   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:54:43.617217   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:54:43.635389   24995 system_svc.go:56] duration metric: took 18.194216ms WaitForService to wait for kubelet
	I0923 10:54:43.635423   24995 kubeadm.go:582] duration metric: took 25.171320686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:54:43.635447   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:54:43.803841   24995 request.go:632] Waited for 168.315518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803908   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803913   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.803920   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.803924   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.807502   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.808531   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808553   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808564   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808567   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808571   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808574   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808579   24995 node_conditions.go:105] duration metric: took 173.125439ms to run NodePressure ...
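The NodePressure step above reads the capacity each node reports (here 17734596Ki of ephemeral storage and 2 CPUs per node). A minimal sketch of listing those figures with the same assumed clientset; the helper name is illustrative.

package clusterchecks

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNodeCapacities prints CPU and ephemeral-storage capacity for every
// node, the same fields the node_conditions check above logs.
func listNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}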
	I0923 10:54:43.808592   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:54:43.808611   24995 start.go:255] writing updated cluster config ...
	I0923 10:54:43.808882   24995 ssh_runner.go:195] Run: rm -f paused
	I0923 10:54:43.860687   24995 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:54:43.862725   24995 out.go:177] * Done! kubectl is now configured to use "ha-790780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.017577715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112017545367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=015ba0c7-7b52-4a45-ae48-eb0e17eafdce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.018115556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bb598cc-35b6-47c7-abcc-3a5140fb14af name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.018191662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bb598cc-35b6-47c7-abcc-3a5140fb14af name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.018517279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bb598cc-35b6-47c7-abcc-3a5140fb14af name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.057032109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ddc7d76-d87e-4760-af7c-5b7bb7f382b7 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.057105834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ddc7d76-d87e-4760-af7c-5b7bb7f382b7 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.058190934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c15c3eaa-9214-492e-bcb6-e8d2069cd5d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.058666568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112058636710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c15c3eaa-9214-492e-bcb6-e8d2069cd5d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.059346895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b425ba5b-b958-466b-a283-d883a28941f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.059442403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b425ba5b-b958-466b-a283-d883a28941f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.059880441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b425ba5b-b958-466b-a283-d883a28941f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.100346406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4dffc57-cb76-453b-81d0-b95f26a78d91 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.100491321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4dffc57-cb76-453b-81d0-b95f26a78d91 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.101629453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d4856c2-7f22-4914-9bf3-ab31d411bbda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.102242424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112102220977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d4856c2-7f22-4914-9bf3-ab31d411bbda name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.103295385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91956e18-f461-46e0-8a89-9ab28ef90ebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.103416102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91956e18-f461-46e0-8a89-9ab28ef90ebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.103670764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91956e18-f461-46e0-8a89-9ab28ef90ebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.145063280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31d7f705-c3cd-4a67-97fb-8ff3c73824a9 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.145156234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31d7f705-c3cd-4a67-97fb-8ff3c73824a9 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.146802498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48968278-a316-4f28-9ab2-9d27806acc96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.147230625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112147209099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48968278-a316-4f28-9ab2-9d27806acc96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.147858422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dde2923-1d86-447d-b674-b296271ff6b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.147943989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dde2923-1d86-447d-b674-b296271ff6b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:32 ha-790780 crio[667]: time="2024-09-23 10:58:32.148269949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dde2923-1d86-447d-b674-b296271ff6b2 name=/runtime.v1.RuntimeService/ListContainers
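	The CRI-O entries above are the runtime answering the kubelet's periodic ListContainers/Version/ImageFsInfo RPCs; they enumerate the control-plane node's container set and report no errors. For anyone reproducing this post-mortem while the ha-790780 profile is still running, the same stream can be pulled straight from the node (a sketch, assuming journalctl is available in the VM as on the standard minikube ISO):
	
	  minikube -p ha-790780 ssh -- sudo journalctl -u crio --no-pager --since "2024-09-23 10:58:00"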
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b6cdb320cb12       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   64b2fb317bf54       busybox-7dff88458-hmsb2
	fceea5af30884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7f70accb19994       coredns-7c65d6cfc9-vzhrs
	504391361e9f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e1bfaf7843489       storage-provisioner
	8f008021913ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   61e4d18ef53ff       coredns-7c65d6cfc9-bsbth
	20dea9bfd7b93       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   12e4b7f578705       kube-proxy-jqwtw
	70e8cba43f15f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   a1aa2ae427e36       kindnet-5d9ww
	58d7d0f860c2c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2b178d8dcf3ad       kube-vip-ha-790780
	579e069dd212e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   d632e3d4755d2       kube-scheduler-ha-790780
	4881d47948f52       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d65f8d57327b0       kube-controller-manager-ha-790780
	f13343b3ed39e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9e910662aa470       kube-apiserver-ha-790780
	621532bf94f06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   cf20e920bbbdf       etcd-ha-790780
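	The table above is minikube's rendering of the same ListContainers response seen in the CRI-O debug log: every expected control-plane component (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-vip) plus kindnet, kube-proxy, both CoreDNS replicas, storage-provisioner and the test busybox pod is Running with zero restarts. An equivalent on-node view, for reference (a sketch, assuming crictl is present in the VM as it is on the minikube ISO):
	
	  minikube -p ha-790780 ssh -- sudo crictl ps -a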
	
	
	==> coredns [8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927] <==
	[INFO] 10.244.1.2:59395 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000129294s
	[INFO] 10.244.1.2:33748 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00097443s
	[INFO] 10.244.0.4:46523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219823s
	[INFO] 10.244.2.2:35535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239865s
	[INFO] 10.244.2.2:36372 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017141396s
	[INFO] 10.244.2.2:50254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209403s
	[INFO] 10.244.1.2:48243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198306s
	[INFO] 10.244.1.2:39091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230366s
	[INFO] 10.244.1.2:49543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199975s
	[INFO] 10.244.0.4:45173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102778s
	[INFO] 10.244.0.4:32836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736533s
	[INFO] 10.244.0.4:44659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129519s
	[INFO] 10.244.0.4:54433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098668s
	[INFO] 10.244.0.4:37772 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.2.2:43894 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134793s
	[INFO] 10.244.2.2:34604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147389s
	[INFO] 10.244.1.2:53532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242838s
	[INFO] 10.244.1.2:45804 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159901s
	[INFO] 10.244.1.2:39298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112738s
	[INFO] 10.244.0.4:43692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093071s
	[INFO] 10.244.0.4:51414 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096722s
	[INFO] 10.244.2.2:56355 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295938s
	[INFO] 10.244.1.2:59520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142399s
	[INFO] 10.244.0.4:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090911s
	[INFO] 10.244.0.4:53926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114353s
	
	
	==> coredns [fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc] <==
	[INFO] 10.244.2.2:49856 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000346472s
	[INFO] 10.244.2.2:58522 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173747s
	[INFO] 10.244.2.2:60029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181162s
	[INFO] 10.244.2.2:38618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184142s
	[INFO] 10.244.1.2:46063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758433s
	[INFO] 10.244.1.2:60295 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001402726s
	[INFO] 10.244.1.2:38240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160236s
	[INFO] 10.244.1.2:41977 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113581s
	[INFO] 10.244.1.2:44892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133741s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105848s
	[INFO] 10.244.0.4:58776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144697s
	[INFO] 10.244.0.4:33311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001202009s
	[INFO] 10.244.2.2:57039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019058s
	[INFO] 10.244.2.2:57127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153386s
	[INFO] 10.244.1.2:52843 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168874s
	[INFO] 10.244.0.4:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014121s
	[INFO] 10.244.0.4:38864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079009s
	[INFO] 10.244.2.2:47502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158927s
	[INFO] 10.244.2.2:57106 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185408s
	[INFO] 10.244.2.2:34447 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139026s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015634s
	[INFO] 10.244.1.2:53446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288738s
	[INFO] 10.244.1.2:52114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166821s
	[INFO] 10.244.0.4:54732 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099319s
	[INFO] 10.244.0.4:49290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071388s
	
	
	==> describe nodes <==
	Name:               ha-790780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-790780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4137f4910e0940f183cebcb2073b69b7
	  System UUID:                4137f491-0e09-40f1-83ce-bcb2073b69b7
	  Boot ID:                    d20b206f-6d12-4950-af76-836822976902
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmsb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 coredns-7c65d6cfc9-bsbth             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-vzhrs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-790780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-5d9ww                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-790780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-790780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-jqwtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-790780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-790780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m23s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-790780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-790780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-790780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  NodeReady                6m12s  kubelet          Node ha-790780 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	
	
	Name:               ha-790780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:56:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-790780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87f6f3c7af44480934336376709a0c8
	  System UUID:                f87f6f3c-7af4-4480-9343-36376709a0c8
	  Boot ID:                    869cdc79-44fe-45ec-baeb-66b85d8eb577
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdk9n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-790780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-x2v9d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-790780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-790780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-x8fb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-790780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-790780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-790780-m02 status is now: NodeNotReady
	
	
	Name:               ha-790780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-790780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2525d1b15b4365a533b4fbbc7d76d5
	  System UUID:                8a2525d1-b15b-4365-a533-b4fbbc7d76d5
	  Boot ID:                    a7b3ffe3-56b6-4c77-b8bb-b94fecea7ce9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2f4vm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-790780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-lzbx6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-790780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-790780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-rqjzc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-790780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-vip-ha-790780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node ha-790780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	
	
	Name:               ha-790780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_55_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:55:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-790780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8bb8bb71d764d5397c864a970ca06f0
	  System UUID:                a8bb8bb7-1d76-4d53-97c8-64a970ca06f0
	  Boot ID:                    43fa98cd-88cb-492d-a6f8-c4d1f11bcb1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sz6cc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-58k4g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  Starting                 3m8s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-790780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 10:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050514] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807632] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451360] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.609594] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.519719] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055679] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057192] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.186843] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114356] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269409] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.949380] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.106869] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.060266] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 10:52] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.081963] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.787202] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.501695] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 10:53] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989] <==
	{"level":"warn","ts":"2024-09-23T10:58:32.423663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.427796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.439090Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.446102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.452984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.458026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.469955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.470777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.473091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.475830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.481442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.487398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.493921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.496932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.500030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.504640Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.507106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.513724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.534339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.544603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.553669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.566872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.586653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.595876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:32.604814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:58:32 up 7 min,  0 users,  load average: 0.27, 0.34, 0.17
	Linux ha-790780 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9] <==
	I0923 10:57:59.683870       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674500       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:09.674559       1 main.go:299] handling current node
	I0923 10:58:09.674578       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:09.674587       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:09.674781       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:09.674808       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674853       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:09.674859       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:58:19.676409       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:19.676470       1 main.go:299] handling current node
	I0923 10:58:19.676501       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:19.676506       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:19.676695       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:19.676726       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:19.676792       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:19.676813       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:58:29.683950       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:29.684192       1 main.go:299] handling current node
	I0923 10:58:29.684303       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:29.684447       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:29.685323       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:29.685472       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:29.685646       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:29.685828       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d] <==
	I0923 10:52:02.470272       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 10:52:02.487288       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 10:52:02.636999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 10:52:06.966628       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 10:52:07.024027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 10:54:15.771868       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.772121       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.642µs, panicked: false, err: <nil>, panic-reason: <nil>" logger="UnhandledError"
	E0923 10:54:15.773436       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.774650       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.775958       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.219249ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 10:54:50.840870       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42568: use of closed network connection
	E0923 10:54:51.046928       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42582: use of closed network connection
	E0923 10:54:51.239325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42598: use of closed network connection
	E0923 10:54:51.469344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42622: use of closed network connection
	E0923 10:54:51.662336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42652: use of closed network connection
	E0923 10:54:51.840022       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42678: use of closed network connection
	E0923 10:54:52.023650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0923 10:54:52.216046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42724: use of closed network connection
	E0923 10:54:52.402748       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42750: use of closed network connection
	E0923 10:54:52.693691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42788: use of closed network connection
	E0923 10:54:52.868191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42814: use of closed network connection
	E0923 10:54:53.230910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42838: use of closed network connection
	E0923 10:54:53.405713       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42860: use of closed network connection
	E0923 10:54:53.587256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42870: use of closed network connection
	W0923 10:56:21.308721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.234]
	
	
	==> kube-controller-manager [4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d] <==
	I0923 10:55:25.124525       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-790780-m04" podCIDRs=["10.244.3.0/24"]
	I0923 10:55:25.124586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.124620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.133509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.356496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.728032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:26.243588       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-790780-m04"
	I0923 10:55:26.283171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.507667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.553251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.470149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.543154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:35.178257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:55:46.224292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.262261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:55.382846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:56:46.290698       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:56:46.290858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.314933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.418190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.658083ms"
	I0923 10:56:46.418270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.621µs"
	I0923 10:56:48.568648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:51.466837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	
	
	==> kube-proxy [20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:52:09.262552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:52:09.284499       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.234"]
	E0923 10:52:09.284588       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:52:09.317271       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:52:09.317394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:52:09.317457       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:52:09.320801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:52:09.321989       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:52:09.322038       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:52:09.326499       1 config.go:199] "Starting service config controller"
	I0923 10:52:09.327483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:52:09.328524       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:52:09.328570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:52:09.331934       1 config.go:328] "Starting node config controller"
	I0923 10:52:09.331976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:52:09.428869       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:52:09.429192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:52:09.432816       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e] <==
	E0923 10:52:00.723488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:52:00.842918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:52:00.843015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:52:03.091035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:54:44.751853       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8af6924d-0142-47f2-8cbe-927fbdaa50d7" pod="default/busybox-7dff88458-hdk9n" assumedNode="ha-790780-m02" currentNode="ha-790780-m03"
	E0923 10:54:44.780763       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m03"
	E0923 10:54:44.781985       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8af6924d-0142-47f2-8cbe-927fbdaa50d7(default/busybox-7dff88458-hdk9n) was assumed on ha-790780-m03 but assigned to ha-790780-m02" pod="default/busybox-7dff88458-hdk9n"
	E0923 10:54:44.782087       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" pod="default/busybox-7dff88458-hdk9n"
	I0923 10:54:44.782173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m02"
	E0923 10:55:25.174653       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xmfxv" node="ha-790780-m04"
	E0923 10:55:25.174983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-xmfxv"
	E0923 10:55:25.175545       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-58k4g" node="ha-790780-m04"
	E0923 10:55:25.178321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-58k4g"
	E0923 10:55:25.223677       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.224053       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 143d16c9-72ab-4693-86a9-227280e3d88b(kube-system/kindnet-rhmrv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rhmrv"
	E0923 10:55:25.224238       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-rhmrv"
	I0923 10:55:25.224407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.257675       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.257807       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 20bf7e97-ed43-402a-b267-4c1d2f4b5bbf(kube-system/kindnet-sz6cc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sz6cc"
	E0923 10:55:25.257863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-sz6cc"
	I0923 10:55:25.257906       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.260301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 10:55:25.260462       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6f2d4b5-c6d7-4f34-b81a-2644640ae3bb(kube-system/kube-proxy-ghvw7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvw7"
	E0923 10:55:25.260529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-ghvw7"
	I0923 10:55:25.260575       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	
	
	==> kubelet <==
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752554    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752656    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759306    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759943    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761662    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761739    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763857    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763900    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767538    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767974    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770316    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770429    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.632462    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773513    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773536    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775771    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.777799    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.778161    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:32 ha-790780 kubelet[1310]: E0923 10:58:32.780237    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112779957598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:32 ha-790780 kubelet[1310]: E0923 10:58:32.780276    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112779957598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-790780 -n ha-790780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-790780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr: (4.178181439s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-790780 -n ha-790780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 logs -n 25: (1.381362328s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m03_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m04 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp testdata/cp-test.txt                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m03 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-790780 node stop m02 -v=7                                                    | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-790780 node start m02 -v=7                                                   | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:51:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:51:23.890810   24995 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:51:23.891041   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891049   24995 out.go:358] Setting ErrFile to fd 2...
	I0923 10:51:23.891053   24995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:51:23.891205   24995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:51:23.891746   24995 out.go:352] Setting JSON to false
	I0923 10:51:23.892628   24995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2027,"bootTime":1727086657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:51:23.892719   24995 start.go:139] virtualization: kvm guest
	I0923 10:51:23.894714   24995 out.go:177] * [ha-790780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:51:23.896009   24995 notify.go:220] Checking for updates...
	I0923 10:51:23.896015   24995 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:51:23.897316   24995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:51:23.898483   24995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:51:23.899745   24995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:23.900930   24995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:51:23.902097   24995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:51:23.903412   24995 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:51:23.936575   24995 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 10:51:23.937738   24995 start.go:297] selected driver: kvm2
	I0923 10:51:23.937760   24995 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:51:23.937777   24995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:51:23.938571   24995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.938654   24995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:51:23.953375   24995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:51:23.953445   24995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:51:23.953711   24995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:51:23.953749   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:23.953813   24995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 10:51:23.953825   24995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:51:23.953893   24995 start.go:340] cluster config:
	{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:23.954007   24995 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:51:23.956292   24995 out.go:177] * Starting "ha-790780" primary control-plane node in "ha-790780" cluster
	I0923 10:51:23.957482   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:23.957517   24995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:51:23.957529   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:51:23.957599   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:51:23.957611   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:51:23.957934   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:23.957961   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json: {Name:mk715d227144254f94a596853caa0306f08b9846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:23.958130   24995 start.go:360] acquireMachinesLock for ha-790780: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:51:23.958172   24995 start.go:364] duration metric: took 22.743µs to acquireMachinesLock for "ha-790780"
	I0923 10:51:23.958195   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:51:23.958264   24995 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 10:51:23.959776   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:51:23.959913   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:51:23.959959   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:51:23.974405   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0923 10:51:23.974852   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:51:23.975494   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:51:23.975517   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:51:23.975789   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:51:23.975953   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:23.976064   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:23.976227   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:51:23.976305   24995 client.go:168] LocalClient.Create starting
	I0923 10:51:23.976394   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:51:23.976453   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976474   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976558   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:51:23.976590   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:51:23.976607   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:51:23.976637   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:51:23.976646   24995 main.go:141] libmachine: (ha-790780) Calling .PreCreateCheck
	I0923 10:51:23.976933   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:23.977298   24995 main.go:141] libmachine: Creating machine...
	I0923 10:51:23.977310   24995 main.go:141] libmachine: (ha-790780) Calling .Create
	I0923 10:51:23.977514   24995 main.go:141] libmachine: (ha-790780) Creating KVM machine...
	I0923 10:51:23.978674   24995 main.go:141] libmachine: (ha-790780) DBG | found existing default KVM network
	I0923 10:51:23.979392   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:23.979247   25018 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0923 10:51:23.979430   24995 main.go:141] libmachine: (ha-790780) DBG | created network xml: 
	I0923 10:51:23.979450   24995 main.go:141] libmachine: (ha-790780) DBG | <network>
	I0923 10:51:23.979460   24995 main.go:141] libmachine: (ha-790780) DBG |   <name>mk-ha-790780</name>
	I0923 10:51:23.979472   24995 main.go:141] libmachine: (ha-790780) DBG |   <dns enable='no'/>
	I0923 10:51:23.979483   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979494   24995 main.go:141] libmachine: (ha-790780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 10:51:23.979499   24995 main.go:141] libmachine: (ha-790780) DBG |     <dhcp>
	I0923 10:51:23.979504   24995 main.go:141] libmachine: (ha-790780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 10:51:23.979512   24995 main.go:141] libmachine: (ha-790780) DBG |     </dhcp>
	I0923 10:51:23.979520   24995 main.go:141] libmachine: (ha-790780) DBG |   </ip>
	I0923 10:51:23.979526   24995 main.go:141] libmachine: (ha-790780) DBG |   
	I0923 10:51:23.979532   24995 main.go:141] libmachine: (ha-790780) DBG | </network>
	I0923 10:51:23.979541   24995 main.go:141] libmachine: (ha-790780) DBG | 
	I0923 10:51:23.984532   24995 main.go:141] libmachine: (ha-790780) DBG | trying to create private KVM network mk-ha-790780 192.168.39.0/24...
	I0923 10:51:24.046915   24995 main.go:141] libmachine: (ha-790780) DBG | private KVM network mk-ha-790780 192.168.39.0/24 created
	I0923 10:51:24.046951   24995 main.go:141] libmachine: (ha-790780) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.046970   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.046901   25018 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.046982   24995 main.go:141] libmachine: (ha-790780) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:51:24.047052   24995 main.go:141] libmachine: (ha-790780) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:51:24.290133   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.289993   25018 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa...
	I0923 10:51:24.626743   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626586   25018 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk...
	I0923 10:51:24.626779   24995 main.go:141] libmachine: (ha-790780) DBG | Writing magic tar header
	I0923 10:51:24.626794   24995 main.go:141] libmachine: (ha-790780) DBG | Writing SSH key tar header
	I0923 10:51:24.626805   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:24.626737   25018 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 ...
	I0923 10:51:24.626913   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780 (perms=drwx------)
	I0923 10:51:24.626940   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780
	I0923 10:51:24.626950   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:51:24.626966   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:51:24.626976   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:51:24.626990   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:51:24.627002   24995 main.go:141] libmachine: (ha-790780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:51:24.627026   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:51:24.627037   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:24.627047   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:51:24.627061   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:51:24.627079   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:51:24.627093   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:51:24.627102   24995 main.go:141] libmachine: (ha-790780) DBG | Checking permissions on dir: /home
	I0923 10:51:24.627113   24995 main.go:141] libmachine: (ha-790780) DBG | Skipping /home - not owner
	I0923 10:51:24.628104   24995 main.go:141] libmachine: (ha-790780) define libvirt domain using xml: 
	I0923 10:51:24.628127   24995 main.go:141] libmachine: (ha-790780) <domain type='kvm'>
	I0923 10:51:24.628137   24995 main.go:141] libmachine: (ha-790780)   <name>ha-790780</name>
	I0923 10:51:24.628145   24995 main.go:141] libmachine: (ha-790780)   <memory unit='MiB'>2200</memory>
	I0923 10:51:24.628153   24995 main.go:141] libmachine: (ha-790780)   <vcpu>2</vcpu>
	I0923 10:51:24.628162   24995 main.go:141] libmachine: (ha-790780)   <features>
	I0923 10:51:24.628169   24995 main.go:141] libmachine: (ha-790780)     <acpi/>
	I0923 10:51:24.628175   24995 main.go:141] libmachine: (ha-790780)     <apic/>
	I0923 10:51:24.628183   24995 main.go:141] libmachine: (ha-790780)     <pae/>
	I0923 10:51:24.628200   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628210   24995 main.go:141] libmachine: (ha-790780)   </features>
	I0923 10:51:24.628219   24995 main.go:141] libmachine: (ha-790780)   <cpu mode='host-passthrough'>
	I0923 10:51:24.628231   24995 main.go:141] libmachine: (ha-790780)   
	I0923 10:51:24.628242   24995 main.go:141] libmachine: (ha-790780)   </cpu>
	I0923 10:51:24.628248   24995 main.go:141] libmachine: (ha-790780)   <os>
	I0923 10:51:24.628256   24995 main.go:141] libmachine: (ha-790780)     <type>hvm</type>
	I0923 10:51:24.628266   24995 main.go:141] libmachine: (ha-790780)     <boot dev='cdrom'/>
	I0923 10:51:24.628274   24995 main.go:141] libmachine: (ha-790780)     <boot dev='hd'/>
	I0923 10:51:24.628283   24995 main.go:141] libmachine: (ha-790780)     <bootmenu enable='no'/>
	I0923 10:51:24.628289   24995 main.go:141] libmachine: (ha-790780)   </os>
	I0923 10:51:24.628298   24995 main.go:141] libmachine: (ha-790780)   <devices>
	I0923 10:51:24.628316   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='cdrom'>
	I0923 10:51:24.628332   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/boot2docker.iso'/>
	I0923 10:51:24.628339   24995 main.go:141] libmachine: (ha-790780)       <target dev='hdc' bus='scsi'/>
	I0923 10:51:24.628343   24995 main.go:141] libmachine: (ha-790780)       <readonly/>
	I0923 10:51:24.628348   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628352   24995 main.go:141] libmachine: (ha-790780)     <disk type='file' device='disk'>
	I0923 10:51:24.628365   24995 main.go:141] libmachine: (ha-790780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:51:24.628374   24995 main.go:141] libmachine: (ha-790780)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/ha-790780.rawdisk'/>
	I0923 10:51:24.628379   24995 main.go:141] libmachine: (ha-790780)       <target dev='hda' bus='virtio'/>
	I0923 10:51:24.628383   24995 main.go:141] libmachine: (ha-790780)     </disk>
	I0923 10:51:24.628388   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628398   24995 main.go:141] libmachine: (ha-790780)       <source network='mk-ha-790780'/>
	I0923 10:51:24.628422   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628441   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628451   24995 main.go:141] libmachine: (ha-790780)     <interface type='network'>
	I0923 10:51:24.628456   24995 main.go:141] libmachine: (ha-790780)       <source network='default'/>
	I0923 10:51:24.628464   24995 main.go:141] libmachine: (ha-790780)       <model type='virtio'/>
	I0923 10:51:24.628468   24995 main.go:141] libmachine: (ha-790780)     </interface>
	I0923 10:51:24.628474   24995 main.go:141] libmachine: (ha-790780)     <serial type='pty'>
	I0923 10:51:24.628489   24995 main.go:141] libmachine: (ha-790780)       <target port='0'/>
	I0923 10:51:24.628497   24995 main.go:141] libmachine: (ha-790780)     </serial>
	I0923 10:51:24.628501   24995 main.go:141] libmachine: (ha-790780)     <console type='pty'>
	I0923 10:51:24.628509   24995 main.go:141] libmachine: (ha-790780)       <target type='serial' port='0'/>
	I0923 10:51:24.628513   24995 main.go:141] libmachine: (ha-790780)     </console>
	I0923 10:51:24.628518   24995 main.go:141] libmachine: (ha-790780)     <rng model='virtio'>
	I0923 10:51:24.628524   24995 main.go:141] libmachine: (ha-790780)       <backend model='random'>/dev/random</backend>
	I0923 10:51:24.628536   24995 main.go:141] libmachine: (ha-790780)     </rng>
	I0923 10:51:24.628558   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628571   24995 main.go:141] libmachine: (ha-790780)     
	I0923 10:51:24.628577   24995 main.go:141] libmachine: (ha-790780)   </devices>
	I0923 10:51:24.628588   24995 main.go:141] libmachine: (ha-790780) </domain>
	I0923 10:51:24.628594   24995 main.go:141] libmachine: (ha-790780) 
	I0923 10:51:24.633208   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:13:36:c6 in network default
	I0923 10:51:24.633757   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:24.633774   24995 main.go:141] libmachine: (ha-790780) Ensuring networks are active...
	I0923 10:51:24.634465   24995 main.go:141] libmachine: (ha-790780) Ensuring network default is active
	I0923 10:51:24.634776   24995 main.go:141] libmachine: (ha-790780) Ensuring network mk-ha-790780 is active
	I0923 10:51:24.635311   24995 main.go:141] libmachine: (ha-790780) Getting domain xml...
	I0923 10:51:24.635925   24995 main.go:141] libmachine: (ha-790780) Creating domain...
	I0923 10:51:25.814040   24995 main.go:141] libmachine: (ha-790780) Waiting to get IP...
	I0923 10:51:25.814916   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:25.815340   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:25.815417   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:25.815355   25018 retry.go:31] will retry after 302.426541ms: waiting for machine to come up
	I0923 10:51:26.119886   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.120307   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.120331   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.120269   25018 retry.go:31] will retry after 296.601666ms: waiting for machine to come up
	I0923 10:51:26.418700   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.419028   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.419055   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.418981   25018 retry.go:31] will retry after 377.849162ms: waiting for machine to come up
	I0923 10:51:26.798501   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:26.798922   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:26.798948   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:26.798856   25018 retry.go:31] will retry after 450.118776ms: waiting for machine to come up
	I0923 10:51:27.250394   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.250790   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.250808   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.250758   25018 retry.go:31] will retry after 570.631994ms: waiting for machine to come up
	I0923 10:51:27.822428   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:27.822886   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:27.822908   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:27.822851   25018 retry.go:31] will retry after 623.272262ms: waiting for machine to come up
	I0923 10:51:28.447752   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:28.448147   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:28.448174   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:28.448108   25018 retry.go:31] will retry after 1.077429863s: waiting for machine to come up
	I0923 10:51:29.527061   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:29.527469   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:29.527505   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:29.527430   25018 retry.go:31] will retry after 917.693346ms: waiting for machine to come up
	I0923 10:51:30.446246   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:30.446572   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:30.446596   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:30.446529   25018 retry.go:31] will retry after 1.557196838s: waiting for machine to come up
	I0923 10:51:32.006148   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:32.006519   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:32.006543   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:32.006479   25018 retry.go:31] will retry after 2.085720919s: waiting for machine to come up
	I0923 10:51:34.093658   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:34.094039   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:34.094071   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:34.093997   25018 retry.go:31] will retry after 2.432097525s: waiting for machine to come up
	I0923 10:51:36.529456   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:36.529801   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:36.529829   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:36.529771   25018 retry.go:31] will retry after 3.373414151s: waiting for machine to come up
	I0923 10:51:39.904476   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:39.904832   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find current IP address of domain ha-790780 in network mk-ha-790780
	I0923 10:51:39.904859   24995 main.go:141] libmachine: (ha-790780) DBG | I0923 10:51:39.904782   25018 retry.go:31] will retry after 4.54310411s: waiting for machine to come up
	I0923 10:51:44.449079   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449524   24995 main.go:141] libmachine: (ha-790780) Found IP for machine: 192.168.39.234
	I0923 10:51:44.449566   24995 main.go:141] libmachine: (ha-790780) Reserving static IP address...
	I0923 10:51:44.449583   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has current primary IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.449899   24995 main.go:141] libmachine: (ha-790780) DBG | unable to find host DHCP lease matching {name: "ha-790780", mac: "52:54:00:56:51:7d", ip: "192.168.39.234"} in network mk-ha-790780
	I0923 10:51:44.518563   24995 main.go:141] libmachine: (ha-790780) DBG | Getting to WaitForSSH function...
	I0923 10:51:44.518595   24995 main.go:141] libmachine: (ha-790780) Reserved static IP address: 192.168.39.234
	I0923 10:51:44.518615   24995 main.go:141] libmachine: (ha-790780) Waiting for SSH to be available...
	I0923 10:51:44.520920   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521300   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.521330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.521451   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH client type: external
	I0923 10:51:44.521486   24995 main.go:141] libmachine: (ha-790780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa (-rw-------)
	I0923 10:51:44.521531   24995 main.go:141] libmachine: (ha-790780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:51:44.521546   24995 main.go:141] libmachine: (ha-790780) DBG | About to run SSH command:
	I0923 10:51:44.521554   24995 main.go:141] libmachine: (ha-790780) DBG | exit 0
	I0923 10:51:44.645412   24995 main.go:141] libmachine: (ha-790780) DBG | SSH cmd err, output: <nil>: 
	I0923 10:51:44.645692   24995 main.go:141] libmachine: (ha-790780) KVM machine creation complete!
	I0923 10:51:44.645984   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:44.646583   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646744   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:44.646893   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:51:44.646905   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:51:44.648172   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:51:44.648194   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:51:44.648202   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:51:44.648210   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.650665   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.650987   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.651020   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.651139   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.651308   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651457   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.651573   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.651700   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.651893   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.651906   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:51:44.756746   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:51:44.756773   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:51:44.756782   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.759344   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759648   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.759681   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.759843   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.760022   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760232   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.760420   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.760578   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.760787   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.760799   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:51:44.870171   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:51:44.870267   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:51:44.870273   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:51:44.870280   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870545   24995 buildroot.go:166] provisioning hostname "ha-790780"
	I0923 10:51:44.870571   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:44.870747   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.873216   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873593   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.873628   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.873723   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.873886   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874025   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.874142   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.874274   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.874442   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.874453   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780 && echo "ha-790780" | sudo tee /etc/hostname
	I0923 10:51:44.995765   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 10:51:44.995787   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:44.998312   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998668   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:44.998696   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:44.998853   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:44.999016   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999145   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:44.999274   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:44.999435   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:44.999654   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:44.999678   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:51:45.115136   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:51:45.115177   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:51:45.115207   24995 buildroot.go:174] setting up certificates
	I0923 10:51:45.115216   24995 provision.go:84] configureAuth start
	I0923 10:51:45.115226   24995 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 10:51:45.115475   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.117929   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118257   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.118279   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.118435   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.120330   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120597   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.120620   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.120789   24995 provision.go:143] copyHostCerts
	I0923 10:51:45.120818   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120862   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:51:45.120884   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:51:45.120966   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:51:45.121085   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121144   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:51:45.121152   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:51:45.121191   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:51:45.121264   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121286   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:51:45.121292   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:51:45.121321   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:51:45.121410   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780 san=[127.0.0.1 192.168.39.234 ha-790780 localhost minikube]
	I0923 10:51:45.266715   24995 provision.go:177] copyRemoteCerts
	I0923 10:51:45.266777   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:51:45.266798   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.269666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.269959   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.269988   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.270213   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.270378   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.270482   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.270568   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.355778   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:51:45.355843   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:51:45.380730   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:51:45.380795   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 10:51:45.414661   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:51:45.414743   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:51:45.441465   24995 provision.go:87] duration metric: took 326.238007ms to configureAuth
	I0923 10:51:45.441495   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:51:45.441678   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:51:45.441758   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.444126   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444463   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.444481   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.444672   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.444841   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445006   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.445137   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.445259   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.445469   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.445484   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:51:45.681011   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:51:45.681063   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:51:45.681071   24995 main.go:141] libmachine: (ha-790780) Calling .GetURL
	I0923 10:51:45.682285   24995 main.go:141] libmachine: (ha-790780) DBG | Using libvirt version 6000000
	I0923 10:51:45.684579   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.684908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.684938   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.685089   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:51:45.685101   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:51:45.685107   24995 client.go:171] duration metric: took 21.708786455s to LocalClient.Create
	I0923 10:51:45.685125   24995 start.go:167] duration metric: took 21.708900673s to libmachine.API.Create "ha-790780"
	I0923 10:51:45.685138   24995 start.go:293] postStartSetup for "ha-790780" (driver="kvm2")
	I0923 10:51:45.685151   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:51:45.685172   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.685421   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:51:45.685449   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.687596   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.687908   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.687933   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.688073   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.688250   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.688408   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.688548   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.771920   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:51:45.776355   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:51:45.776391   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:51:45.776469   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:51:45.776563   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:51:45.776575   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:51:45.776693   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:51:45.786199   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:45.811518   24995 start.go:296] duration metric: took 126.349059ms for postStartSetup
	I0923 10:51:45.811609   24995 main.go:141] libmachine: (ha-790780) Calling .GetConfigRaw
	I0923 10:51:45.812294   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.815129   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815486   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.815514   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.815712   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:51:45.815895   24995 start.go:128] duration metric: took 21.857620166s to createHost
	I0923 10:51:45.815920   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.818316   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818630   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.818651   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.818850   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.819010   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819165   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.819278   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.819431   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:51:45.819590   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 10:51:45.819599   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:51:45.926174   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088705.899223528
	
	I0923 10:51:45.926195   24995 fix.go:216] guest clock: 1727088705.899223528
	I0923 10:51:45.926202   24995 fix.go:229] Guest: 2024-09-23 10:51:45.899223528 +0000 UTC Remote: 2024-09-23 10:51:45.81591122 +0000 UTC m=+21.959703843 (delta=83.312308ms)
	I0923 10:51:45.926237   24995 fix.go:200] guest clock delta is within tolerance: 83.312308ms
	I0923 10:51:45.926247   24995 start.go:83] releasing machines lock for "ha-790780", held for 21.968060369s
	I0923 10:51:45.926269   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.926484   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:45.929017   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929273   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.929296   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.929451   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.929900   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930074   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:51:45.930159   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:51:45.930211   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.930270   24995 ssh_runner.go:195] Run: cat /version.json
	I0923 10:51:45.930294   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:51:45.932829   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933159   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933185   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933203   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933326   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.933490   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.933624   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.933676   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:45.933701   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:45.933776   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:45.934053   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:51:45.934206   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:51:45.934327   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:51:45.934455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:51:46.030649   24995 ssh_runner.go:195] Run: systemctl --version
	I0923 10:51:46.036429   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:51:46.192093   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:51:46.197962   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:51:46.198029   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:51:46.215140   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:51:46.215162   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:51:46.215243   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:51:46.230784   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:51:46.244349   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:51:46.244409   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:51:46.258034   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:51:46.272100   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:51:46.381469   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:51:46.539101   24995 docker.go:233] disabling docker service ...
	I0923 10:51:46.539174   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:51:46.552908   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:51:46.565651   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:51:46.682294   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:51:46.796364   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:51:46.811412   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:51:46.829576   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:51:46.829645   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.839695   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:51:46.839786   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.849955   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.860106   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.870333   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:51:46.880826   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.891077   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.908248   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:51:46.918775   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:51:46.928824   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:51:46.928877   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:51:46.941980   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:51:46.951517   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:47.065808   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:51:47.163613   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:51:47.163683   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:51:47.168401   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:51:47.168449   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:51:47.172083   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:51:47.211404   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:51:47.211475   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.237894   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:51:47.265905   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:51:47.267109   24995 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 10:51:47.269676   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.269976   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:51:47.269998   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:51:47.270189   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:51:47.274345   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:47.287451   24995 kubeadm.go:883] updating cluster {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:51:47.287548   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:51:47.287587   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:47.320493   24995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 10:51:47.320563   24995 ssh_runner.go:195] Run: which lz4
	I0923 10:51:47.324493   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 10:51:47.324590   24995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 10:51:47.328614   24995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 10:51:47.328641   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 10:51:48.664218   24995 crio.go:462] duration metric: took 1.339658259s to copy over tarball
	I0923 10:51:48.664282   24995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 10:51:50.637991   24995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.973686302s)
	I0923 10:51:50.638022   24995 crio.go:469] duration metric: took 1.973779288s to extract the tarball
	I0923 10:51:50.638029   24995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 10:51:50.675284   24995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:51:50.719521   24995 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:51:50.719546   24995 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:51:50.719554   24995 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0923 10:51:50.719685   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:51:50.719772   24995 ssh_runner.go:195] Run: crio config
	I0923 10:51:50.771719   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:51:50.771741   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:51:50.771749   24995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:51:50.771771   24995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-790780 NodeName:ha-790780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:51:50.771891   24995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-790780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:51:50.771915   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:51:50.771953   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:51:50.788554   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:51:50.788662   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0923 10:51:50.788713   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:51:50.798905   24995 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:51:50.798967   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 10:51:50.808504   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 10:51:50.825113   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:51:50.841896   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 10:51:50.858441   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 10:51:50.875731   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:51:50.879691   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:51:50.892112   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:51:51.019767   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:51:51.037039   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.234
	I0923 10:51:51.037069   24995 certs.go:194] generating shared ca certs ...
	I0923 10:51:51.037091   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.037268   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:51:51.037324   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:51:51.037339   24995 certs.go:256] generating profile certs ...
	I0923 10:51:51.037431   24995 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:51:51.037451   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt with IP's: []
	I0923 10:51:51.451020   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt ...
	I0923 10:51:51.451047   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt: {Name:mk7c4e9362162608bb6c01090da1551aaa823d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451244   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key ...
	I0923 10:51:51.451267   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key: {Name:mkcd6bfa32a894b89017c31deaa26203b3b4a176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.451372   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888
	I0923 10:51:51.451392   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.254]
	I0923 10:51:51.607359   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 ...
	I0923 10:51:51.607386   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888: {Name:mka1f4b6ed48e33311f672d8b442f93c1d7c681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607561   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 ...
	I0923 10:51:51.607580   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888: {Name:mk49e13f50fd1588f0bd343a1960a01127e6eea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.607676   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:51:51.607836   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.cfe6b888 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:51:51.607925   24995 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:51:51.607944   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt with IP's: []
	I0923 10:51:51.677169   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt ...
	I0923 10:51:51.677196   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt: {Name:mkd6d1ef61128b90a97b097c5fd8695ddf079ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677369   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key ...
	I0923 10:51:51.677400   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key: {Name:mk47fffc62dd3ae10bfeea7ae4b46f34ad5c053e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:51:51.677517   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:51:51.677535   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:51:51.677548   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:51:51.677618   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:51:51.677647   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:51:51.677668   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:51:51.677686   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:51:51.677703   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:51:51.677763   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:51:51.677808   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:51:51.677821   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:51:51.677855   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:51:51.677884   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:51:51.677916   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:51:51.677966   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:51:51.678003   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:51.678023   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:51:51.678049   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.679006   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:51:51.705139   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:51:51.728566   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:51:51.751552   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:51:51.775089   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 10:51:51.801987   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:51:51.826155   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:51:51.852767   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:51:51.876344   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:51:51.905311   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:51:51.928779   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:51:51.952260   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:51:51.969409   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:51:51.975384   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:51:51.986501   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.990964   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.991023   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:51:51.996747   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:51:52.007942   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:51:52.018842   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023215   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.023268   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:51:52.028919   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:51:52.039648   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:51:52.050482   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054942   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.054996   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:51:52.061057   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
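	(Editor's note: the openssl/ln sequence above is how the CA material gets wired into the node's system trust store: hash each PEM with "openssl x509 -hash -noout", then symlink <hash>.0 to it under /etc/ssl/certs. Below is a minimal Go sketch of that one step, illustrative only; installCACert is a hypothetical helper, not minikube's actual code.)

```go
// Sketch: install a CA certificate into the system trust store the way the
// log above does it — hash the PEM, then symlink <hash>.0 to the file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certsDir string) error {
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Link <hash>.0 -> the PEM file, as "ln -fs" does in the log.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like -f
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths taken from the log lines above; running this requires root.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```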
	I0923 10:51:52.072692   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:51:52.076951   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:51:52.077018   24995 kubeadm.go:392] StartCluster: {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:51:52.077118   24995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:51:52.077175   24995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:51:52.116347   24995 cri.go:89] found id: ""
	I0923 10:51:52.116428   24995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:51:52.126761   24995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:51:52.140367   24995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:51:52.152008   24995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:51:52.152029   24995 kubeadm.go:157] found existing configuration files:
	
	I0923 10:51:52.152082   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:51:52.162100   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:51:52.162178   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:51:52.172716   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:51:52.182352   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:51:52.182416   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:51:52.192324   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.201509   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:51:52.201567   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:51:52.211076   24995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:51:52.220241   24995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:51:52.220301   24995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:51:52.229931   24995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:51:52.330228   24995 kubeadm.go:310] W0923 10:51:52.311529     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.331060   24995 kubeadm.go:310] W0923 10:51:52.312477     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:51:52.439125   24995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:52:03.033231   24995 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:52:03.033332   24995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:52:03.033492   24995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:52:03.033623   24995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:52:03.033751   24995 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:52:03.033844   24995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:52:03.035457   24995 out.go:235]   - Generating certificates and keys ...
	I0923 10:52:03.035550   24995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:52:03.035642   24995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:52:03.035741   24995 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:52:03.035823   24995 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:52:03.035900   24995 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:52:03.035992   24995 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:52:03.036084   24995 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:52:03.036211   24995 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036285   24995 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:52:03.036444   24995 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-790780 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0923 10:52:03.036563   24995 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:52:03.036657   24995 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:52:03.036710   24995 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:52:03.036757   24995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:52:03.036842   24995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:52:03.036923   24995 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:52:03.037009   24995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:52:03.037098   24995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:52:03.037182   24995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:52:03.037302   24995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:52:03.037427   24995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:52:03.038904   24995 out.go:235]   - Booting up control plane ...
	I0923 10:52:03.039001   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:52:03.039082   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:52:03.039176   24995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:52:03.039295   24995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:52:03.039422   24995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:52:03.039482   24995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:52:03.039635   24995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:52:03.039761   24995 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:52:03.039849   24995 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.524673ms
	I0923 10:52:03.039940   24995 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:52:03.040024   24995 kubeadm.go:310] [api-check] The API server is healthy after 5.986201438s
	I0923 10:52:03.040175   24995 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:52:03.040361   24995 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:52:03.040444   24995 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:52:03.040632   24995 kubeadm.go:310] [mark-control-plane] Marking the node ha-790780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:52:03.040704   24995 kubeadm.go:310] [bootstrap-token] Using token: xsoed2.p6r9ib7q4k96hg0w
	I0923 10:52:03.042019   24995 out.go:235]   - Configuring RBAC rules ...
	I0923 10:52:03.042101   24995 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:52:03.042173   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:52:03.042294   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:52:03.042406   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:52:03.042505   24995 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:52:03.042577   24995 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:52:03.042670   24995 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:52:03.042707   24995 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:52:03.042747   24995 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:52:03.042753   24995 kubeadm.go:310] 
	I0923 10:52:03.042801   24995 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:52:03.042807   24995 kubeadm.go:310] 
	I0923 10:52:03.042880   24995 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:52:03.042886   24995 kubeadm.go:310] 
	I0923 10:52:03.042910   24995 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:52:03.042960   24995 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:52:03.043006   24995 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:52:03.043012   24995 kubeadm.go:310] 
	I0923 10:52:03.043055   24995 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:52:03.043062   24995 kubeadm.go:310] 
	I0923 10:52:03.043106   24995 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:52:03.043112   24995 kubeadm.go:310] 
	I0923 10:52:03.043171   24995 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:52:03.043244   24995 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:52:03.043303   24995 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:52:03.043309   24995 kubeadm.go:310] 
	I0923 10:52:03.043383   24995 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:52:03.043484   24995 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:52:03.043504   24995 kubeadm.go:310] 
	I0923 10:52:03.043608   24995 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.043699   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 \
	I0923 10:52:03.043719   24995 kubeadm.go:310] 	--control-plane 
	I0923 10:52:03.043725   24995 kubeadm.go:310] 
	I0923 10:52:03.043823   24995 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:52:03.043833   24995 kubeadm.go:310] 
	I0923 10:52:03.043941   24995 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xsoed2.p6r9ib7q4k96hg0w \
	I0923 10:52:03.044037   24995 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 
	I0923 10:52:03.044047   24995 cni.go:84] Creating CNI manager for ""
	I0923 10:52:03.044054   24995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 10:52:03.045502   24995 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:52:03.046832   24995 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:52:03.052467   24995 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:52:03.052487   24995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:52:03.076247   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:52:03.444143   24995 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:52:03.444243   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:03.444282   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780 minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=true
	I0923 10:52:03.495007   24995 ops.go:34] apiserver oom_adj: -16
	I0923 10:52:03.592144   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.092654   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:04.592338   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.092806   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:05.592594   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.092195   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:52:06.201502   24995 kubeadm.go:1113] duration metric: took 2.757318832s to wait for elevateKubeSystemPrivileges
	I0923 10:52:06.201546   24995 kubeadm.go:394] duration metric: took 14.124531532s to StartCluster
	I0923 10:52:06.201569   24995 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.201664   24995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.202567   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:06.202810   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:52:06.202807   24995 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.202841   24995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 10:52:06.202900   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:52:06.202929   24995 addons.go:69] Setting storage-provisioner=true in profile "ha-790780"
	I0923 10:52:06.202937   24995 addons.go:69] Setting default-storageclass=true in profile "ha-790780"
	I0923 10:52:06.202954   24995 addons.go:234] Setting addon storage-provisioner=true in "ha-790780"
	I0923 10:52:06.202961   24995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-790780"
	I0923 10:52:06.202988   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.203012   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.203296   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203334   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.203433   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.203475   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.218688   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0923 10:52:06.218748   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0923 10:52:06.219240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219291   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.219815   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219816   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.219840   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.219858   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.220231   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220235   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.220427   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.220753   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.220795   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.222626   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:52:06.222971   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 10:52:06.223539   24995 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 10:52:06.223901   24995 addons.go:234] Setting addon default-storageclass=true in "ha-790780"
	I0923 10:52:06.223946   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:06.224319   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.224365   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.236739   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0923 10:52:06.237265   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.237749   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.237769   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.238124   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.238287   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.238667   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0923 10:52:06.239113   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.239656   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.239679   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.239955   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.239993   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.240401   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.240443   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.241840   24995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:52:06.243145   24995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.243160   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:52:06.243172   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.246249   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246639   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.246666   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.246813   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.246982   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.247123   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.247259   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
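	(Editor's note: the sshutil.go:53 line records minikube opening an SSH connection to the node — user docker, port 22, the machine's id_rsa — so it can copy the addon manifests over. A rough sketch of that connection with golang.org/x/crypto/ssh follows; the key path and address come from the log line, everything else is illustrative.)

```go
// Sketch: dial a minikube node over SSH using the machine's private key,
// roughly what the "new ssh client" step above sets up.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.234:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}
```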
	I0923 10:52:06.256004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0923 10:52:06.256499   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.256973   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.256999   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.257343   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.257522   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:06.259210   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:06.259387   24995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.259399   24995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:52:06.259412   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:06.262267   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262666   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:06.262687   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:06.262832   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:06.262990   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:06.263138   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:06.263273   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:06.304503   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:52:06.398460   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:52:06.446811   24995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:52:06.632495   24995 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 10:52:06.919542   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919563   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919636   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919658   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919873   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.919902   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.919910   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.919919   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.919926   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.919965   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920081   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920099   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920119   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920133   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920197   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.920208   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.920378   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.920390   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.920407   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.920451   24995 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 10:52:06.920471   24995 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 10:52:06.920600   24995 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 10:52:06.920610   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.920623   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.920629   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.937923   24995 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 10:52:06.938595   24995 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 10:52:06.938612   24995 round_trippers.go:469] Request Headers:
	I0923 10:52:06.938621   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:52:06.938629   24995 round_trippers.go:473]     Content-Type: application/json
	I0923 10:52:06.938632   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:52:06.947896   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
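	(Editor's note: the GET and PUT against /apis/storage.k8s.io/v1/storageclasses/standard correspond to marking the freshly created "standard" StorageClass as the default class. Below is a hedged client-go sketch of an equivalent read-modify-update; it is not minikube's actual code, and the kubeconfig path is only illustrative.)

```go
// Sketch: fetch the "standard" StorageClass and mark it as the default,
// mirroring the GET/PUT round-trips logged above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path is illustrative, not taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// GET the "standard" StorageClass.
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Annotate it as the default class, then write it back (the PUT).
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated StorageClass", sc.Name)
}
```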
	I0923 10:52:06.948322   24995 main.go:141] libmachine: Making call to close driver server
	I0923 10:52:06.948337   24995 main.go:141] libmachine: (ha-790780) Calling .Close
	I0923 10:52:06.948594   24995 main.go:141] libmachine: (ha-790780) DBG | Closing plugin on server side
	I0923 10:52:06.948617   24995 main.go:141] libmachine: Successfully made call to close driver server
	I0923 10:52:06.948630   24995 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 10:52:06.950152   24995 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 10:52:06.951554   24995 addons.go:510] duration metric: took 748.719933ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 10:52:06.951590   24995 start.go:246] waiting for cluster config update ...
	I0923 10:52:06.951605   24995 start.go:255] writing updated cluster config ...
	I0923 10:52:06.953365   24995 out.go:201] 
	I0923 10:52:06.954972   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:06.955040   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.956615   24995 out.go:177] * Starting "ha-790780-m02" control-plane node in "ha-790780" cluster
	I0923 10:52:06.957684   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:52:06.957708   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:52:06.957808   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:52:06.957819   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:52:06.957884   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:06.958050   24995 start.go:360] acquireMachinesLock for ha-790780-m02: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:52:06.958105   24995 start.go:364] duration metric: took 32.264µs to acquireMachinesLock for "ha-790780-m02"
	I0923 10:52:06.958126   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:06.958191   24995 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0923 10:52:06.959878   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:52:06.959980   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:06.960026   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:06.976035   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0923 10:52:06.976582   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:06.977118   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:06.977143   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:06.977540   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:06.977757   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:06.977903   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:06.978091   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:52:06.978121   24995 client.go:168] LocalClient.Create starting
	I0923 10:52:06.978164   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:52:06.978206   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978227   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978286   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:52:06.978303   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:52:06.978310   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:52:06.978324   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:52:06.978329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .PreCreateCheck
	I0923 10:52:06.978542   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:06.978925   24995 main.go:141] libmachine: Creating machine...
	I0923 10:52:06.978941   24995 main.go:141] libmachine: (ha-790780-m02) Calling .Create
	I0923 10:52:06.979102   24995 main.go:141] libmachine: (ha-790780-m02) Creating KVM machine...
	I0923 10:52:06.980456   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing default KVM network
	I0923 10:52:06.980575   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found existing private KVM network mk-ha-790780
	I0923 10:52:06.980736   24995 main.go:141] libmachine: (ha-790780-m02) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:06.980762   24995 main.go:141] libmachine: (ha-790780-m02) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:52:06.980809   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:06.980717   25359 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:06.980894   24995 main.go:141] libmachine: (ha-790780-m02) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:52:07.232203   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.232068   25359 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa...
	I0923 10:52:07.333393   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333263   25359 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk...
	I0923 10:52:07.333421   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing magic tar header
	I0923 10:52:07.333438   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Writing SSH key tar header
	I0923 10:52:07.333446   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:07.333398   25359 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 ...
	I0923 10:52:07.333511   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02
	I0923 10:52:07.333532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:52:07.333540   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02 (perms=drwx------)
	I0923 10:52:07.333557   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:52:07.333571   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:52:07.333582   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:52:07.333598   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:52:07.333609   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:52:07.333623   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:52:07.333638   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:52:07.333647   24995 main.go:141] libmachine: (ha-790780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:52:07.333658   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:52:07.333669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Checking permissions on dir: /home
	I0923 10:52:07.333679   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
	I0923 10:52:07.333718   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Skipping /home - not owner
	I0923 10:52:07.334599   24995 main.go:141] libmachine: (ha-790780-m02) define libvirt domain using xml: 
	I0923 10:52:07.334622   24995 main.go:141] libmachine: (ha-790780-m02) <domain type='kvm'>
	I0923 10:52:07.334660   24995 main.go:141] libmachine: (ha-790780-m02)   <name>ha-790780-m02</name>
	I0923 10:52:07.334682   24995 main.go:141] libmachine: (ha-790780-m02)   <memory unit='MiB'>2200</memory>
	I0923 10:52:07.334692   24995 main.go:141] libmachine: (ha-790780-m02)   <vcpu>2</vcpu>
	I0923 10:52:07.334705   24995 main.go:141] libmachine: (ha-790780-m02)   <features>
	I0923 10:52:07.334717   24995 main.go:141] libmachine: (ha-790780-m02)     <acpi/>
	I0923 10:52:07.334724   24995 main.go:141] libmachine: (ha-790780-m02)     <apic/>
	I0923 10:52:07.334732   24995 main.go:141] libmachine: (ha-790780-m02)     <pae/>
	I0923 10:52:07.334741   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.334753   24995 main.go:141] libmachine: (ha-790780-m02)   </features>
	I0923 10:52:07.334764   24995 main.go:141] libmachine: (ha-790780-m02)   <cpu mode='host-passthrough'>
	I0923 10:52:07.334772   24995 main.go:141] libmachine: (ha-790780-m02)   
	I0923 10:52:07.334781   24995 main.go:141] libmachine: (ha-790780-m02)   </cpu>
	I0923 10:52:07.334789   24995 main.go:141] libmachine: (ha-790780-m02)   <os>
	I0923 10:52:07.334798   24995 main.go:141] libmachine: (ha-790780-m02)     <type>hvm</type>
	I0923 10:52:07.334807   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='cdrom'/>
	I0923 10:52:07.334816   24995 main.go:141] libmachine: (ha-790780-m02)     <boot dev='hd'/>
	I0923 10:52:07.334823   24995 main.go:141] libmachine: (ha-790780-m02)     <bootmenu enable='no'/>
	I0923 10:52:07.334834   24995 main.go:141] libmachine: (ha-790780-m02)   </os>
	I0923 10:52:07.334842   24995 main.go:141] libmachine: (ha-790780-m02)   <devices>
	I0923 10:52:07.334853   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='cdrom'>
	I0923 10:52:07.334882   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/boot2docker.iso'/>
	I0923 10:52:07.334904   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hdc' bus='scsi'/>
	I0923 10:52:07.334913   24995 main.go:141] libmachine: (ha-790780-m02)       <readonly/>
	I0923 10:52:07.334923   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334932   24995 main.go:141] libmachine: (ha-790780-m02)     <disk type='file' device='disk'>
	I0923 10:52:07.334946   24995 main.go:141] libmachine: (ha-790780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:52:07.334959   24995 main.go:141] libmachine: (ha-790780-m02)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/ha-790780-m02.rawdisk'/>
	I0923 10:52:07.334968   24995 main.go:141] libmachine: (ha-790780-m02)       <target dev='hda' bus='virtio'/>
	I0923 10:52:07.334978   24995 main.go:141] libmachine: (ha-790780-m02)     </disk>
	I0923 10:52:07.334987   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.334997   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='mk-ha-790780'/>
	I0923 10:52:07.335007   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335023   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335035   24995 main.go:141] libmachine: (ha-790780-m02)     <interface type='network'>
	I0923 10:52:07.335044   24995 main.go:141] libmachine: (ha-790780-m02)       <source network='default'/>
	I0923 10:52:07.335058   24995 main.go:141] libmachine: (ha-790780-m02)       <model type='virtio'/>
	I0923 10:52:07.335109   24995 main.go:141] libmachine: (ha-790780-m02)     </interface>
	I0923 10:52:07.335132   24995 main.go:141] libmachine: (ha-790780-m02)     <serial type='pty'>
	I0923 10:52:07.335143   24995 main.go:141] libmachine: (ha-790780-m02)       <target port='0'/>
	I0923 10:52:07.335158   24995 main.go:141] libmachine: (ha-790780-m02)     </serial>
	I0923 10:52:07.335174   24995 main.go:141] libmachine: (ha-790780-m02)     <console type='pty'>
	I0923 10:52:07.335192   24995 main.go:141] libmachine: (ha-790780-m02)       <target type='serial' port='0'/>
	I0923 10:52:07.335204   24995 main.go:141] libmachine: (ha-790780-m02)     </console>
	I0923 10:52:07.335212   24995 main.go:141] libmachine: (ha-790780-m02)     <rng model='virtio'>
	I0923 10:52:07.335225   24995 main.go:141] libmachine: (ha-790780-m02)       <backend model='random'>/dev/random</backend>
	I0923 10:52:07.335234   24995 main.go:141] libmachine: (ha-790780-m02)     </rng>
	I0923 10:52:07.335249   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335266   24995 main.go:141] libmachine: (ha-790780-m02)     
	I0923 10:52:07.335277   24995 main.go:141] libmachine: (ha-790780-m02)   </devices>
	I0923 10:52:07.335286   24995 main.go:141] libmachine: (ha-790780-m02) </domain>
	I0923 10:52:07.335295   24995 main.go:141] libmachine: (ha-790780-m02) 
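	(Editor's note: everything between "define libvirt domain using xml" and "Creating domain..." amounts to handing a domain XML document to libvirtd and booting it. A stripped-down sketch with the libvirt Go bindings (libvirt.org/go/libvirt) follows; the XML here is a tiny placeholder, far smaller than the one printed above, and the whole snippet is illustrative rather than minikube's driver code.)

```go
// Sketch: define a libvirt domain from XML and start it, the way the
// kvm2 driver does for ha-790780-m02 above.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same connection URI as KVMQemuURI in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Placeholder domain XML; the real one carries disks, networks, serial
	// console, etc., as shown in the log.
	domainXML := `<domain type='kvm'>
  <name>sketch-vm</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
</domain>`

	// Define (persist) the domain, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain started")
}
```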
	I0923 10:52:07.341524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:71:94:5b in network default
	I0923 10:52:07.342077   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring networks are active...
	I0923 10:52:07.342095   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:07.342878   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network default is active
	I0923 10:52:07.343243   24995 main.go:141] libmachine: (ha-790780-m02) Ensuring network mk-ha-790780 is active
	I0923 10:52:07.343596   24995 main.go:141] libmachine: (ha-790780-m02) Getting domain xml...
	I0923 10:52:07.344221   24995 main.go:141] libmachine: (ha-790780-m02) Creating domain...
	I0923 10:52:08.567103   24995 main.go:141] libmachine: (ha-790780-m02) Waiting to get IP...
	I0923 10:52:08.567991   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.568397   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.568451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.568387   25359 retry.go:31] will retry after 271.175765ms: waiting for machine to come up
	I0923 10:52:08.840977   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:08.841448   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:08.841471   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:08.841414   25359 retry.go:31] will retry after 362.305584ms: waiting for machine to come up
	I0923 10:52:09.205937   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.206493   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.206603   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.206454   25359 retry.go:31] will retry after 321.793905ms: waiting for machine to come up
	I0923 10:52:09.529876   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:09.530376   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:09.530401   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:09.530327   25359 retry.go:31] will retry after 559.183772ms: waiting for machine to come up
	I0923 10:52:10.091098   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.091500   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.091524   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.091457   25359 retry.go:31] will retry after 578.148121ms: waiting for machine to come up
	I0923 10:52:10.671087   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:10.671615   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:10.671645   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:10.671580   25359 retry.go:31] will retry after 633.076035ms: waiting for machine to come up
	I0923 10:52:11.306241   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:11.306681   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:11.306701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:11.306639   25359 retry.go:31] will retry after 1.109332207s: waiting for machine to come up
	I0923 10:52:12.417432   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:12.417916   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:12.417942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:12.417872   25359 retry.go:31] will retry after 1.294744351s: waiting for machine to come up
	I0923 10:52:13.713819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:13.714303   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:13.714329   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:13.714250   25359 retry.go:31] will retry after 1.531952439s: waiting for machine to come up
	I0923 10:52:15.247542   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:15.248025   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:15.248057   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:15.247975   25359 retry.go:31] will retry after 1.941306258s: waiting for machine to come up
	I0923 10:52:17.190839   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:17.191321   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:17.191351   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:17.191284   25359 retry.go:31] will retry after 2.353774872s: waiting for machine to come up
	I0923 10:52:19.546668   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:19.547031   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:19.547055   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:19.546983   25359 retry.go:31] will retry after 2.747965423s: waiting for machine to come up
	I0923 10:52:22.297443   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:22.297864   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:22.297889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:22.297821   25359 retry.go:31] will retry after 4.500988279s: waiting for machine to come up
	I0923 10:52:26.799947   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:26.800373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find current IP address of domain ha-790780-m02 in network mk-ha-790780
	I0923 10:52:26.800398   24995 main.go:141] libmachine: (ha-790780-m02) DBG | I0923 10:52:26.800337   25359 retry.go:31] will retry after 3.653543746s: waiting for machine to come up
	I0923 10:52:30.458551   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.459044   24995 main.go:141] libmachine: (ha-790780-m02) Found IP for machine: 192.168.39.43
	I0923 10:52:30.459067   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.459075   24995 main.go:141] libmachine: (ha-790780-m02) Reserving static IP address...
	I0923 10:52:30.459483   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "ha-790780-m02", mac: "52:54:00:6f:fc:60", ip: "192.168.39.43"} in network mk-ha-790780
	I0923 10:52:30.533257   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:30.533288   24995 main.go:141] libmachine: (ha-790780-m02) Reserved static IP address: 192.168.39.43
	I0923 10:52:30.533301   24995 main.go:141] libmachine: (ha-790780-m02) Waiting for SSH to be available...
	I0923 10:52:30.536138   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:30.536313   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780
	I0923 10:52:30.536335   24995 main.go:141] libmachine: (ha-790780-m02) DBG | unable to find defined IP address of network mk-ha-790780 interface with MAC address 52:54:00:6f:fc:60
	I0923 10:52:30.536505   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:30.536532   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:30.536568   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:30.536590   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:30.536606   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:30.540119   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: exit status 255: 
	I0923 10:52:30.540140   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0923 10:52:30.540147   24995 main.go:141] libmachine: (ha-790780-m02) DBG | command : exit 0
	I0923 10:52:30.540151   24995 main.go:141] libmachine: (ha-790780-m02) DBG | err     : exit status 255
	I0923 10:52:30.540162   24995 main.go:141] libmachine: (ha-790780-m02) DBG | output  : 
	I0923 10:52:33.541623   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Getting to WaitForSSH function...
	I0923 10:52:33.544182   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544547   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.544574   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.544757   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH client type: external
	I0923 10:52:33.544784   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa (-rw-------)
	I0923 10:52:33.544814   24995 main.go:141] libmachine: (ha-790780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:52:33.544831   24995 main.go:141] libmachine: (ha-790780-m02) DBG | About to run SSH command:
	I0923 10:52:33.544854   24995 main.go:141] libmachine: (ha-790780-m02) DBG | exit 0
	I0923 10:52:33.669504   24995 main.go:141] libmachine: (ha-790780-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 10:52:33.669774   24995 main.go:141] libmachine: (ha-790780-m02) KVM machine creation complete!
	I0923 10:52:33.670110   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:33.670656   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.670934   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:33.671133   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:52:33.671150   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetState
	I0923 10:52:33.672305   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:52:33.672319   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:52:33.672324   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:52:33.672329   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.674474   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674819   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.674843   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.674997   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.675174   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675328   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.675465   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.675610   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.675839   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.675852   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:52:33.776748   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:33.776774   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:52:33.776785   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.779405   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779751   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.779783   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.779884   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.780088   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780269   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.780419   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.780568   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.780760   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.780773   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:52:33.882210   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:52:33.882291   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:52:33.882305   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:52:33.882314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882575   24995 buildroot.go:166] provisioning hostname "ha-790780-m02"
	I0923 10:52:33.882600   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:33.882773   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:33.885308   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885642   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:33.885677   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:33.885853   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:33.886030   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886155   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:33.886300   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:33.886430   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:33.886626   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:33.886642   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m02 && echo "ha-790780-m02" | sudo tee /etc/hostname
	I0923 10:52:34.003577   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m02
	
	I0923 10:52:34.003598   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.006028   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006433   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.006454   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.006632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.006821   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.006980   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.007139   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.007310   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.007465   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.007480   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:52:34.118625   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:52:34.118662   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:52:34.118683   24995 buildroot.go:174] setting up certificates
	I0923 10:52:34.118696   24995 provision.go:84] configureAuth start
	I0923 10:52:34.118714   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetMachineName
	I0923 10:52:34.118982   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.121671   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122010   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.122038   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.122133   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.124342   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124650   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.124675   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.124825   24995 provision.go:143] copyHostCerts
	I0923 10:52:34.124854   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124893   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:52:34.124906   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:52:34.124985   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:52:34.125072   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125097   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:52:34.125107   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:52:34.125144   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:52:34.125212   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125235   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:52:34.125242   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:52:34.125281   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:52:34.125349   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m02 san=[127.0.0.1 192.168.39.43 ha-790780-m02 localhost minikube]
	I0923 10:52:34.193891   24995 provision.go:177] copyRemoteCerts
	I0923 10:52:34.193957   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:52:34.193986   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.196570   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.196865   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.196889   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.197016   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.197136   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.197266   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.197369   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.281916   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:52:34.281976   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:52:34.308044   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:52:34.308105   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:52:34.333433   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:52:34.333520   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:52:34.360112   24995 provision.go:87] duration metric: took 241.398124ms to configureAuth
	I0923 10:52:34.360147   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:52:34.360368   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:34.360455   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.363054   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363373   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.363404   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.363563   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.363803   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.363983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.364144   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.364318   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.364480   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.364494   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:52:34.591141   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:52:34.591170   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:52:34.591177   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetURL
	I0923 10:52:34.592369   24995 main.go:141] libmachine: (ha-790780-m02) DBG | Using libvirt version 6000000
	I0923 10:52:34.594796   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595094   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.595121   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.595270   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:52:34.595283   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:52:34.595290   24995 client.go:171] duration metric: took 27.617159251s to LocalClient.Create
	I0923 10:52:34.595315   24995 start.go:167] duration metric: took 27.61722609s to libmachine.API.Create "ha-790780"
	I0923 10:52:34.595328   24995 start.go:293] postStartSetup for "ha-790780-m02" (driver="kvm2")
	I0923 10:52:34.595341   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:52:34.595379   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.595602   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:52:34.595632   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.597589   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.597898   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.597926   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.598021   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.598195   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.598358   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.598520   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.684195   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:52:34.689242   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:52:34.689272   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:52:34.689348   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:52:34.689459   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:52:34.689471   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:52:34.689556   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:52:34.700320   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:34.725191   24995 start.go:296] duration metric: took 129.850231ms for postStartSetup
	I0923 10:52:34.725244   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetConfigRaw
	I0923 10:52:34.725799   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.728545   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.728886   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.728913   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.729093   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:52:34.729294   24995 start.go:128] duration metric: took 27.771090928s to createHost
	I0923 10:52:34.729314   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.731286   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731644   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.731669   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.731823   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.731990   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732151   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.732281   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.732440   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:52:34.732637   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0923 10:52:34.732658   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:52:34.834231   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088754.794402068
	
	I0923 10:52:34.834249   24995 fix.go:216] guest clock: 1727088754.794402068
	I0923 10:52:34.834255   24995 fix.go:229] Guest: 2024-09-23 10:52:34.794402068 +0000 UTC Remote: 2024-09-23 10:52:34.729306022 +0000 UTC m=+70.873098644 (delta=65.096046ms)
	I0923 10:52:34.834270   24995 fix.go:200] guest clock delta is within tolerance: 65.096046ms
	I0923 10:52:34.834274   24995 start.go:83] releasing machines lock for "ha-790780-m02", held for 27.876160912s
	I0923 10:52:34.834293   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.834511   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:34.837173   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.837494   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.837520   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.839594   24995 out.go:177] * Found network options:
	I0923 10:52:34.840920   24995 out.go:177]   - NO_PROXY=192.168.39.234
	W0923 10:52:34.842074   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842099   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842612   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842764   24995 main.go:141] libmachine: (ha-790780-m02) Calling .DriverName
	I0923 10:52:34.842853   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:52:34.842888   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	W0923 10:52:34.842903   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:52:34.842968   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:52:34.842983   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHHostname
	I0923 10:52:34.845348   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845558   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845701   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845723   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.845847   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.845942   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:34.845969   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:34.846014   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846122   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHPort
	I0923 10:52:34.846203   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846268   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHKeyPath
	I0923 10:52:34.846323   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:34.846389   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetSSHUsername
	I0923 10:52:34.846494   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m02/id_rsa Username:docker}
	I0923 10:52:35.081176   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:52:35.087607   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:52:35.087663   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:52:35.103528   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:52:35.103555   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:52:35.103622   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:52:35.120834   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:52:35.135839   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:52:35.135902   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:52:35.150051   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:52:35.166191   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:52:35.300053   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:52:35.467434   24995 docker.go:233] disabling docker service ...
	I0923 10:52:35.467505   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:52:35.481901   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:52:35.494845   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:52:35.623420   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:52:35.753868   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:52:35.768422   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:52:35.787586   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:52:35.787649   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.799053   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:52:35.799126   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.810558   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.821594   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.832724   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:52:35.843898   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.855726   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.873592   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:52:35.884110   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:52:35.893791   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:52:35.893856   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:52:35.906807   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:52:35.916973   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:36.035527   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:52:36.128791   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:52:36.128861   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:52:36.133474   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:52:36.133527   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:52:36.137009   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:52:36.176502   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:52:36.176587   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.204178   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:52:36.234043   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:52:36.235621   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:52:36.236738   24995 main.go:141] libmachine: (ha-790780-m02) Calling .GetIP
	I0923 10:52:36.239083   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239451   24995 main.go:141] libmachine: (ha-790780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:fc:60", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:52:21 +0000 UTC Type:0 Mac:52:54:00:6f:fc:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-790780-m02 Clientid:01:52:54:00:6f:fc:60}
	I0923 10:52:36.239480   24995 main.go:141] libmachine: (ha-790780-m02) DBG | domain ha-790780-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:6f:fc:60 in network mk-ha-790780
	I0923 10:52:36.239678   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:52:36.243606   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:52:36.255882   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:52:36.256081   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:52:36.256374   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.256416   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.270776   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0923 10:52:36.271240   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.271692   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.271718   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.271991   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.272238   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:52:36.273724   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:36.274034   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:36.274069   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:36.288288   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0923 10:52:36.288706   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:36.289138   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:36.289156   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:36.289414   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:36.289558   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:36.289677   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.43
	I0923 10:52:36.289688   24995 certs.go:194] generating shared ca certs ...
	I0923 10:52:36.289705   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.289819   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:52:36.289854   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:52:36.289863   24995 certs.go:256] generating profile certs ...
	I0923 10:52:36.289959   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:52:36.289984   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0
	I0923 10:52:36.289997   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.254]
	I0923 10:52:36.380163   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 ...
	I0923 10:52:36.380191   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0: {Name:mkcca314f563c49b9f271f2aa6db3e6f62b32cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380347   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 ...
	I0923 10:52:36.380359   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0: {Name:mkec241aeb6bb82c01cd41cf66da0be3a70fdccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:52:36.380434   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:52:36.380560   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.b2c775e0 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:52:36.380681   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:52:36.380695   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:52:36.380707   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:52:36.380720   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:52:36.380735   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:52:36.380747   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:52:36.380759   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:52:36.380771   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:52:36.380783   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:52:36.380831   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:52:36.380860   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:52:36.380869   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:52:36.380891   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:52:36.380911   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:52:36.380932   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:52:36.380968   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:52:36.380992   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.381005   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.381017   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:52:36.381045   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:36.384036   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384404   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:36.384430   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:36.384577   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:36.384750   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:36.384881   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:36.384987   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:36.457700   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:52:36.466345   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:52:36.478344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:52:36.483561   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:52:36.494070   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:52:36.498527   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:52:36.509289   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:52:36.514499   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:52:36.524608   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:52:36.528591   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:52:36.538971   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:52:36.542839   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:52:36.553841   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:52:36.579371   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:52:36.604546   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:52:36.628677   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:52:36.653097   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 10:52:36.680685   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:52:36.705242   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:52:36.729370   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:52:36.752651   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:52:36.776422   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:52:36.799568   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:52:36.823834   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:52:36.840782   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:52:36.857346   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:52:36.873712   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:52:36.889839   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:52:36.905626   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:52:36.921660   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:52:36.938136   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:52:36.943716   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:52:36.953982   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958476   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.958521   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:52:36.964147   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:52:36.974525   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:52:36.985437   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989845   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.989893   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:52:36.995312   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:52:37.005409   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:52:37.015583   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019922   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.019974   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:52:37.025448   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
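
The test/ln/openssl steps above install each extra CA under /usr/share/ca-certificates and then expose it to OpenSSL-based clients through a <subject-hash>.0 symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A minimal Go sketch of that scheme, shelling out to the same openssl binary the logged commands use (illustrative only, not minikube's certs.go; it needs root to write /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of a PEM certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink, mirroring the logged
	// "openssl x509 -hash -noout" + "ln -fs" pair.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
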
	I0923 10:52:37.035595   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:52:37.039362   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:52:37.039415   24995 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.1 crio true true} ...
	I0923 10:52:37.039492   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
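
The kubelet unit printed above is the systemd drop-in that will be written to the new machine; only the node-specific --hostname-override and --node-ip differ between ha-790780 and ha-790780-m02. A hedged sketch of rendering such a drop-in with Go's text/template (the field names are illustrative, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit mirrors the [Unit]/[Service] drop-in shown in the log above.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// Illustrative values taken from the log above (ha-790780-m02).
		data := struct {
			KubernetesVersion, Hostname, NodeIP string
		}{"v1.31.1", "ha-790780-m02", "192.168.39.43"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = tmpl.Execute(os.Stdout, data)
	}
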
	I0923 10:52:37.039513   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:52:37.039552   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:52:37.055529   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:52:37.055596   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
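
With cp_enable, lb_enable and vip_leaderelection all set, the static pod above makes whichever control-plane node currently holds the plndr-cp-lock lease announce the API-server VIP 192.168.39.254 over ARP on eth0 and balance port 8443 across the control planes. A small editorial sketch (not part of the captured log) for checking that the VIP actually answers TLS before kubeconfigs are pointed at it:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.39.254:8443 is the APIServerHAVIP from the config dump above.
		conn, err := tls.DialWithDialer(
			&net.Dialer{Timeout: 3 * time.Second},
			"tcp", "192.168.39.254:8443",
			// Reachability probe only, so skip certificate verification here.
			&tls.Config{InsecureSkipVerify: true},
		)
		if err != nil {
			fmt.Println("VIP not reachable yet:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP is up, serving cert for:", conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
	}
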
	I0923 10:52:37.055650   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.065414   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:52:37.065472   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:52:37.075491   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:52:37.075506   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0923 10:52:37.075520   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.075497   24995 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0923 10:52:37.075574   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:52:37.080294   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:52:37.080325   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:52:38.529041   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.529117   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:52:38.533986   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:52:38.534028   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:52:39.337289   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:52:39.353663   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.353773   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:52:39.358145   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:52:39.358182   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
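
kubectl, kubeadm and kubelet above are each fetched from dl.k8s.io together with a .sha256 companion file, cached under .minikube/cache, and only then scp'd into /var/lib/minikube/binaries/v1.31.1. A stand-alone sketch of that verify-then-install pattern (illustrative; minikube's download.go additionally handles caching and retries):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads url into path and returns the SHA-256 of what was written.
	func fetch(url, path string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		f, err := os.Create(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		got, err := fetch(base, "kubectl")
		if err != nil {
			panic(err)
		}
		// The .sha256 file published alongside each binary contains just the hex digest.
		resp, err := http.Get(base + ".sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, _ := io.ReadAll(resp.Body)
		if got != strings.TrimSpace(string(want)) {
			panic("checksum mismatch for kubectl")
		}
		fmt.Println("kubectl verified:", got)
	}
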
	I0923 10:52:39.672771   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:52:39.682637   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:52:39.699260   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:52:39.715572   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:52:39.732521   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:52:39.736488   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
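
The grep/cp one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA VIP 192.168.39.254 before cluster DNS exists, which is what lets the kubeadm join further below reach the control plane by name. The same rewrite expressed in Go, as a sketch that simply rewrites the file in place:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const name = "control-plane.minikube.internal"
		const entry = "192.168.39.254\t" + name

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping for the control-plane name, as the
			// grep -v in the logged one-liner does.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("pinned", entry)
	}
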
	I0923 10:52:39.748539   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:52:39.875794   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:52:39.893533   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:52:39.893887   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:52:39.893927   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:52:39.908489   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I0923 10:52:39.908913   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:52:39.909435   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:52:39.909466   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:52:39.909786   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:52:39.909988   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:52:39.910172   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:52:39.910308   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:52:39.910342   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:52:39.913308   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913748   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:52:39.913778   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:52:39.913955   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:52:39.914131   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:52:39.914260   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:52:39.914383   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:52:40.061073   24995 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:52:40.061122   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I0923 10:53:01.101827   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d9ei0t.d7gczbf91ghyxy4a --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (21.040673445s)
	I0923 10:53:01.101877   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:53:01.765759   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m02 minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:53:01.907605   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:53:02.022219   24995 start.go:319] duration metric: took 22.112042939s to joinCluster
	I0923 10:53:02.022286   24995 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:02.022624   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:02.023699   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:53:02.024977   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:02.301994   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:02.355631   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:53:02.355833   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:53:02.355886   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
	I0923 10:53:02.356182   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m02" to be "Ready" ...
	I0923 10:53:02.356275   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.356282   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.356289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.356293   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.365629   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:53:02.856673   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:02.856694   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:02.856703   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:02.856706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:02.865889   24995 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:53:03.356651   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.356671   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.356680   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.356687   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.363168   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:53:03.857045   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:03.857073   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:03.857084   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:03.857090   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:03.860890   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.356575   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.356597   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.356604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.356608   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.359661   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:04.360223   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:04.856507   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:04.856529   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:04.856537   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:04.856540   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:04.860119   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.356700   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.356722   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.356728   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.356733   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.360476   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:05.856749   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:05.856773   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:05.856781   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:05.856784   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:05.860556   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.356805   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.356825   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.356833   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.356837   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.359991   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:06.361007   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:06.857386   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:06.857410   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:06.857422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:06.857428   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:06.860894   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:07.357257   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.357281   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.357291   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.357296   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.361346   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:07.856430   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:07.856457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:07.856468   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:07.856475   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:07.860130   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.357367   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.357402   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.357416   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.357422   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.360772   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:08.361285   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:08.856627   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:08.856648   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:08.856656   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:08.856661   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:08.860220   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.357037   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.357059   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.357070   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.357075   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.360298   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:09.857427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:09.857457   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:09.857469   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:09.857474   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:09.860786   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.357151   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.357171   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.357180   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.357183   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.360916   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:10.362707   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:10.857145   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:10.857166   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:10.857174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:10.857178   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:10.861809   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:11.356801   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.356822   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.356830   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.356834   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.360464   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:11.856414   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:11.856436   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:11.856447   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:11.856450   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:11.859649   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.357058   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.357081   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.357088   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.357092   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.361042   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.857390   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:12.857414   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:12.857424   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:12.857428   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:12.861016   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:12.861719   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:13.357113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.357138   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.357150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.357155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.360431   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:13.857223   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:13.857243   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:13.857251   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:13.857255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:13.860307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.357308   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.357331   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.357339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.357342   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.361127   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:14.856952   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:14.856977   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:14.856987   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:14.856992   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:14.860782   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.356456   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.356485   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.356496   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.356502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.359792   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:15.360494   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:15.856872   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:15.856897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:15.856907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:15.856912   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:15.860634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.356764   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.356786   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.356793   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.356798   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.360240   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:16.856427   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:16.856454   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:16.856466   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:16.856472   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:16.860397   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.356784   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.356806   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.356814   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.356819   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.360664   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:17.361536   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:17.856878   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:17.856902   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:17.856910   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:17.856915   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:17.860694   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.356716   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.356739   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.356746   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.356750   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.360583   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:18.856463   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:18.856487   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:18.856495   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:18.856502   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:18.860301   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:19.356990   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.357018   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.357028   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.357031   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.361547   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:19.362649   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:19.857046   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:19.857065   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:19.857073   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:19.857077   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:19.860596   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.357289   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.357312   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.357321   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.357326   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.361074   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:20.857154   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:20.857178   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:20.857186   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:20.857190   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:20.860563   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.357410   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.357434   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.357445   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.357449   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.362160   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.362767   24995 node_ready.go:53] node "ha-790780-m02" has status "Ready":"False"
	I0923 10:53:21.857033   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.857057   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.857065   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.857071   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.860457   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:21.860908   24995 node_ready.go:49] node "ha-790780-m02" has status "Ready":"True"
	I0923 10:53:21.860928   24995 node_ready.go:38] duration metric: took 19.504727616s for node "ha-790780-m02" to be "Ready" ...
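
The repeated GETs above are node_ready.go polling the node object roughly every 500 ms until its NodeReady condition reports True, under a 6-minute ceiling. An equivalent client-go sketch (editorial; the kubeconfig path, node name and timings are taken from the log, everything else is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as loaded in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19689-3961/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m0s ceiling as node_ready.go
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-790780-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node ha-790780-m02 is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
		}
		panic("timed out waiting for ha-790780-m02 to become Ready")
	}
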
	I0923 10:53:21.860937   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:53:21.861016   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:21.861026   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.861033   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.861037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.865124   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:21.870946   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.871015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:53:21.871023   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.871030   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.871035   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.873727   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.874362   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.874375   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.874383   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.874386   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.876630   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.877063   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.877077   24995 pod_ready.go:82] duration metric: took 6.11171ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877085   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.877131   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:53:21.877139   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.877145   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.877148   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.879422   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.879947   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.879959   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.879966   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.879971   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.881756   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.882229   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.882243   24995 pod_ready.go:82] duration metric: took 5.151724ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882250   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.882288   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:53:21.882295   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.882301   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.882305   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.884597   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.885566   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:21.885580   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.885587   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.885590   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.887691   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.888066   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.888081   24995 pod_ready.go:82] duration metric: took 5.825391ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888088   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.888136   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:53:21.888144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.888150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.888154   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.890206   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:21.890675   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:21.890689   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:21.890699   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:21.890706   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:21.892638   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:53:21.892989   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:21.893005   24995 pod_ready.go:82] duration metric: took 4.911284ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:21.893019   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.057496   24995 request.go:632] Waited for 164.405368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057558   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:53:22.057562   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.057569   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.057573   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.061586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.257674   24995 request.go:632] Waited for 195.391664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257753   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:22.257761   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.257768   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.257772   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.260869   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.261571   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.261592   24995 pod_ready.go:82] duration metric: took 368.566383ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
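
The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own rate limiter on the rest.Config (defaults of QPS 5 / Burst 10), not from the API server; the back-to-back pod_ready and node GETs above simply exceed it. Raising the limits is a client-side change, sketched here (the kubeconfig path and numbers are illustrative):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19689-3961/kubeconfig")
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS 5 / Burst 10; bursts of GETs like the
		// readiness checks above trip the limiter and produce the waits.
		cfg.QPS = 50
		cfg.Burst = 100

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
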
	I0923 10:53:22.261602   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.457665   24995 request.go:632] Waited for 195.996413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457743   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:53:22.457752   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.457762   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.457769   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.463274   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:53:22.657157   24995 request.go:632] Waited for 193.295869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657236   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:22.657245   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.657255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.657261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.661000   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:22.661818   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:22.661846   24995 pod_ready.go:82] duration metric: took 400.236588ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.661858   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:22.857792   24995 request.go:632] Waited for 195.86636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857859   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:53:22.857865   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:22.857872   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:22.857878   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:22.861662   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.057689   24995 request.go:632] Waited for 195.383255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057812   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.057824   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.057834   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.057838   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.061339   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.062080   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.062106   24995 pod_ready.go:82] duration metric: took 400.238848ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.062119   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.257074   24995 request.go:632] Waited for 194.846773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257139   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:53:23.257144   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.257154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.257159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.261117   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.457215   24995 request.go:632] Waited for 195.281467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457266   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:23.457271   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.457280   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.457285   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.460410   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.460927   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.460946   24995 pod_ready.go:82] duration metric: took 398.811897ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.460959   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.657058   24995 request.go:632] Waited for 196.030311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657133   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:53:23.657142   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.657151   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.657160   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.660449   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.857439   24995 request.go:632] Waited for 196.364612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857511   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:23.857517   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:23.857524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:23.857528   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:23.861085   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:23.861628   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:23.861646   24995 pod_ready.go:82] duration metric: took 400.678998ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:23.861658   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.057696   24995 request.go:632] Waited for 195.97414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057780   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:53:24.057788   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.057803   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.057811   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.061523   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.257819   24995 request.go:632] Waited for 195.359423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257886   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:24.257891   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.257898   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.257903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.260794   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:53:24.261474   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.261495   24995 pod_ready.go:82] duration metric: took 399.829683ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.261504   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.457623   24995 request.go:632] Waited for 196.060511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:53:24.457731   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.457743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.457754   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.461018   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.657050   24995 request.go:632] Waited for 195.289482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657104   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:53:24.657112   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.657119   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.657123   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.660508   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:24.661074   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:24.661111   24995 pod_ready.go:82] duration metric: took 399.600186ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.661130   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:24.857061   24995 request.go:632] Waited for 195.872756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857130   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:53:24.857135   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:24.857142   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:24.857146   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:24.860206   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.057515   24995 request.go:632] Waited for 196.490026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057567   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:53:25.057572   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.057579   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.057584   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.060963   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.061666   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:53:25.061685   24995 pod_ready.go:82] duration metric: took 400.549015ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:53:25.061695   24995 pod_ready.go:39] duration metric: took 3.200747429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
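[editor's note] The pod_ready wait above repeatedly GETs each system-critical pod (and its node) until the pod reports the Ready condition, backing off when the client-side throttler kicks in. A minimal client-go sketch of that readiness check follows; it is not minikube's actual helper, and the kubeconfig path, the 400ms poll interval, and the isPodReady helper name are illustrative assumptions (the pod name and 6m timeout come from the log above).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube derives the real one from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one control-plane pod until it is Ready or the 6m budget runs out.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-790780", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}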
	I0923 10:53:25.061708   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:53:25.061767   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:53:25.081513   24995 api_server.go:72] duration metric: took 23.059195196s to wait for apiserver process to appear ...
	I0923 10:53:25.081540   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:53:25.081558   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:53:25.085813   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:53:25.085884   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:53:25.085897   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.085907   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.085914   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.086702   24995 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 10:53:25.086786   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:53:25.086800   24995 api_server.go:131] duration metric: took 5.254846ms to wait for apiserver health ...
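[editor's note] The healthz and /version probes above are plain HTTPS GETs against the apiserver; /healthz answers with the literal body "ok" and /version reports v1.31.1 in this run. A rough stdlib-only equivalent is sketched below; InsecureSkipVerify is an illustration-only shortcut, whereas minikube itself trusts the cluster CA and uses its client credentials.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NOTE: skipping TLS verification only to keep the sketch short.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// /healthz returns "ok" when the apiserver is healthy.
	resp, err := client.Get("https://192.168.39.234:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// /version reports the control-plane version as JSON.
	resp, err = client.Get("https://192.168.39.234:8443/version")
	if err != nil {
		panic(err)
	}
	body, _ = io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("version: %s\n", body)
}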
	I0923 10:53:25.086810   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:53:25.257145   24995 request.go:632] Waited for 170.272303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257205   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.257212   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.257236   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.257246   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.262177   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.267069   24995 system_pods.go:59] 17 kube-system pods found
	I0923 10:53:25.267104   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.267110   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.267114   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.267119   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.267122   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.267125   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.267129   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.267132   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.267135   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.267139   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.267147   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.267153   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.267156   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.267159   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.267162   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.267165   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.267168   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.267174   24995 system_pods.go:74] duration metric: took 180.359181ms to wait for pod list to return data ...
	I0923 10:53:25.267183   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:53:25.457458   24995 request.go:632] Waited for 190.183499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457513   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:53:25.457518   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.457524   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.457529   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.461448   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:53:25.461660   24995 default_sa.go:45] found service account: "default"
	I0923 10:53:25.461673   24995 default_sa.go:55] duration metric: took 194.484894ms for default service account to be created ...
	I0923 10:53:25.461682   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:53:25.657106   24995 request.go:632] Waited for 195.349388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657170   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:53:25.657177   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.657185   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.657189   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.661432   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.665847   24995 system_pods.go:86] 17 kube-system pods found
	I0923 10:53:25.665873   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:53:25.665880   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:53:25.665884   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:53:25.665888   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:53:25.665891   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:53:25.665895   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:53:25.665898   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:53:25.665902   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:53:25.665905   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:53:25.665909   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:53:25.665912   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:53:25.665915   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:53:25.665918   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:53:25.665922   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:53:25.665925   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:53:25.665928   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:53:25.665930   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:53:25.665936   24995 system_pods.go:126] duration metric: took 204.248587ms to wait for k8s-apps to be running ...
	I0923 10:53:25.665944   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:53:25.665984   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:25.684789   24995 system_svc.go:56] duration metric: took 18.833844ms WaitForService to wait for kubelet
	I0923 10:53:25.684821   24995 kubeadm.go:582] duration metric: took 23.662507551s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:53:25.684838   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:53:25.857256   24995 request.go:632] Waited for 172.290601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857312   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:53:25.857319   24995 round_trippers.go:469] Request Headers:
	I0923 10:53:25.857330   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:53:25.857337   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:53:25.861630   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:53:25.862368   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862410   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862427   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:53:25.862432   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:53:25.862438   24995 node_conditions.go:105] duration metric: took 177.594557ms to run NodePressure ...
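[editor's note] The NodePressure step above lists the nodes and reads each node's ephemeral-storage and CPU capacity (17734596Ki and 2 in this run). A small client-go sketch of that read is shown below, assuming a placeholder kubeconfig path; it only mirrors the capacity read, not minikube's full condition verification.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacities logged above come from the node status.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}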
	I0923 10:53:25.862459   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:53:25.862493   24995 start.go:255] writing updated cluster config ...
	I0923 10:53:25.865563   24995 out.go:201] 
	I0923 10:53:25.867057   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:25.867172   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.868777   24995 out.go:177] * Starting "ha-790780-m03" control-plane node in "ha-790780" cluster
	I0923 10:53:25.870020   24995 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:53:25.870049   24995 cache.go:56] Caching tarball of preloaded images
	I0923 10:53:25.870173   24995 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:53:25.870184   24995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:53:25.870283   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:25.870479   24995 start.go:360] acquireMachinesLock for ha-790780-m03: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:53:25.870521   24995 start.go:364] duration metric: took 24.387µs to acquireMachinesLock for "ha-790780-m03"
	I0923 10:53:25.870535   24995 start.go:93] Provisioning new machine with config: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:25.870632   24995 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0923 10:53:25.871978   24995 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 10:53:25.872058   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:25.872097   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:25.887083   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0923 10:53:25.887502   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:25.887952   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:25.887969   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:25.888292   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:25.888496   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:25.888647   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:25.888772   24995 start.go:159] libmachine.API.Create for "ha-790780" (driver="kvm2")
	I0923 10:53:25.888800   24995 client.go:168] LocalClient.Create starting
	I0923 10:53:25.888829   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 10:53:25.888863   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888888   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888936   24995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 10:53:25.888954   24995 main.go:141] libmachine: Decoding PEM data...
	I0923 10:53:25.888964   24995 main.go:141] libmachine: Parsing certificate...
	I0923 10:53:25.888978   24995 main.go:141] libmachine: Running pre-create checks...
	I0923 10:53:25.888986   24995 main.go:141] libmachine: (ha-790780-m03) Calling .PreCreateCheck
	I0923 10:53:25.889134   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:25.889504   24995 main.go:141] libmachine: Creating machine...
	I0923 10:53:25.889516   24995 main.go:141] libmachine: (ha-790780-m03) Calling .Create
	I0923 10:53:25.889669   24995 main.go:141] libmachine: (ha-790780-m03) Creating KVM machine...
	I0923 10:53:25.890855   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing default KVM network
	I0923 10:53:25.890969   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found existing private KVM network mk-ha-790780
	I0923 10:53:25.891095   24995 main.go:141] libmachine: (ha-790780-m03) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:25.891119   24995 main.go:141] libmachine: (ha-790780-m03) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:53:25.891198   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:25.891096   25778 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:25.891276   24995 main.go:141] libmachine: (ha-790780-m03) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 10:53:26.119663   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.119526   25778 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa...
	I0923 10:53:26.169862   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169746   25778 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk...
	I0923 10:53:26.169897   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing magic tar header
	I0923 10:53:26.169907   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Writing SSH key tar header
	I0923 10:53:26.169915   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:26.169856   25778 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 ...
	I0923 10:53:26.169932   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03
	I0923 10:53:26.169988   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03 (perms=drwx------)
	I0923 10:53:26.170004   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 10:53:26.170016   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 10:53:26.170030   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 10:53:26.170039   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 10:53:26.170046   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 10:53:26.170054   24995 main.go:141] libmachine: (ha-790780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 10:53:26.170064   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:26.170078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:53:26.170094   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 10:53:26.170131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 10:53:26.170142   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home/jenkins
	I0923 10:53:26.170148   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Checking permissions on dir: /home
	I0923 10:53:26.170153   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Skipping /home - not owner
	I0923 10:53:26.171065   24995 main.go:141] libmachine: (ha-790780-m03) define libvirt domain using xml: 
	I0923 10:53:26.171093   24995 main.go:141] libmachine: (ha-790780-m03) <domain type='kvm'>
	I0923 10:53:26.171101   24995 main.go:141] libmachine: (ha-790780-m03)   <name>ha-790780-m03</name>
	I0923 10:53:26.171112   24995 main.go:141] libmachine: (ha-790780-m03)   <memory unit='MiB'>2200</memory>
	I0923 10:53:26.171120   24995 main.go:141] libmachine: (ha-790780-m03)   <vcpu>2</vcpu>
	I0923 10:53:26.171126   24995 main.go:141] libmachine: (ha-790780-m03)   <features>
	I0923 10:53:26.171134   24995 main.go:141] libmachine: (ha-790780-m03)     <acpi/>
	I0923 10:53:26.171144   24995 main.go:141] libmachine: (ha-790780-m03)     <apic/>
	I0923 10:53:26.171152   24995 main.go:141] libmachine: (ha-790780-m03)     <pae/>
	I0923 10:53:26.171161   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171166   24995 main.go:141] libmachine: (ha-790780-m03)   </features>
	I0923 10:53:26.171171   24995 main.go:141] libmachine: (ha-790780-m03)   <cpu mode='host-passthrough'>
	I0923 10:53:26.171175   24995 main.go:141] libmachine: (ha-790780-m03)   
	I0923 10:53:26.171184   24995 main.go:141] libmachine: (ha-790780-m03)   </cpu>
	I0923 10:53:26.171200   24995 main.go:141] libmachine: (ha-790780-m03)   <os>
	I0923 10:53:26.171209   24995 main.go:141] libmachine: (ha-790780-m03)     <type>hvm</type>
	I0923 10:53:26.171218   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='cdrom'/>
	I0923 10:53:26.171235   24995 main.go:141] libmachine: (ha-790780-m03)     <boot dev='hd'/>
	I0923 10:53:26.171247   24995 main.go:141] libmachine: (ha-790780-m03)     <bootmenu enable='no'/>
	I0923 10:53:26.171256   24995 main.go:141] libmachine: (ha-790780-m03)   </os>
	I0923 10:53:26.171264   24995 main.go:141] libmachine: (ha-790780-m03)   <devices>
	I0923 10:53:26.171272   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='cdrom'>
	I0923 10:53:26.171284   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/boot2docker.iso'/>
	I0923 10:53:26.171294   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hdc' bus='scsi'/>
	I0923 10:53:26.171302   24995 main.go:141] libmachine: (ha-790780-m03)       <readonly/>
	I0923 10:53:26.171311   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171321   24995 main.go:141] libmachine: (ha-790780-m03)     <disk type='file' device='disk'>
	I0923 10:53:26.171336   24995 main.go:141] libmachine: (ha-790780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 10:53:26.171351   24995 main.go:141] libmachine: (ha-790780-m03)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk'/>
	I0923 10:53:26.171361   24995 main.go:141] libmachine: (ha-790780-m03)       <target dev='hda' bus='virtio'/>
	I0923 10:53:26.171367   24995 main.go:141] libmachine: (ha-790780-m03)     </disk>
	I0923 10:53:26.171378   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171390   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='mk-ha-790780'/>
	I0923 10:53:26.171401   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171412   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171422   24995 main.go:141] libmachine: (ha-790780-m03)     <interface type='network'>
	I0923 10:53:26.171430   24995 main.go:141] libmachine: (ha-790780-m03)       <source network='default'/>
	I0923 10:53:26.171439   24995 main.go:141] libmachine: (ha-790780-m03)       <model type='virtio'/>
	I0923 10:53:26.171447   24995 main.go:141] libmachine: (ha-790780-m03)     </interface>
	I0923 10:53:26.171455   24995 main.go:141] libmachine: (ha-790780-m03)     <serial type='pty'>
	I0923 10:53:26.171462   24995 main.go:141] libmachine: (ha-790780-m03)       <target port='0'/>
	I0923 10:53:26.171471   24995 main.go:141] libmachine: (ha-790780-m03)     </serial>
	I0923 10:53:26.171479   24995 main.go:141] libmachine: (ha-790780-m03)     <console type='pty'>
	I0923 10:53:26.171490   24995 main.go:141] libmachine: (ha-790780-m03)       <target type='serial' port='0'/>
	I0923 10:53:26.171499   24995 main.go:141] libmachine: (ha-790780-m03)     </console>
	I0923 10:53:26.171508   24995 main.go:141] libmachine: (ha-790780-m03)     <rng model='virtio'>
	I0923 10:53:26.171518   24995 main.go:141] libmachine: (ha-790780-m03)       <backend model='random'>/dev/random</backend>
	I0923 10:53:26.171530   24995 main.go:141] libmachine: (ha-790780-m03)     </rng>
	I0923 10:53:26.171537   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171544   24995 main.go:141] libmachine: (ha-790780-m03)     
	I0923 10:53:26.171555   24995 main.go:141] libmachine: (ha-790780-m03)   </devices>
	I0923 10:53:26.171565   24995 main.go:141] libmachine: (ha-790780-m03) </domain>
	I0923 10:53:26.171575   24995 main.go:141] libmachine: (ha-790780-m03) 
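[editor's note] The domain definition just printed is rendered by the kvm2 driver from a Go template and then handed to libvirt. Below is a much-reduced, hypothetical sketch of that render step using only the standard library; the domainConfig struct, the trimmed template, and the disk path are illustrative assumptions, not the driver's real code (the real template also carries the ISO, both network interfaces, the serial console, and the RNG device shown above).

package main

import (
	"os"
	"text/template"
)

// domainConfig holds only the fields used by the trimmed template below;
// the real driver tracks many more options.
type domainConfig struct {
	Name     string
	MemoryMB int
	CPUs     int
	DiskPath string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-790780-m03",
		MemoryMB: 2200,
		CPUs:     2,
		DiskPath: "/home/jenkins/.minikube/machines/ha-790780-m03/ha-790780-m03.rawdisk", // placeholder
	}
	// Render the XML; the driver would pass the result to libvirt to define the domain.
	t := template.Must(template.New("domain").Parse(domainTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}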
	I0923 10:53:26.178380   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:72:76:7a in network default
	I0923 10:53:26.178970   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring networks are active...
	I0923 10:53:26.178994   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:26.179728   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network default is active
	I0923 10:53:26.180047   24995 main.go:141] libmachine: (ha-790780-m03) Ensuring network mk-ha-790780 is active
	I0923 10:53:26.180480   24995 main.go:141] libmachine: (ha-790780-m03) Getting domain xml...
	I0923 10:53:26.181303   24995 main.go:141] libmachine: (ha-790780-m03) Creating domain...
	I0923 10:53:27.415592   24995 main.go:141] libmachine: (ha-790780-m03) Waiting to get IP...
	I0923 10:53:27.416244   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.416680   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.416705   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.416654   25778 retry.go:31] will retry after 301.241192ms: waiting for machine to come up
	I0923 10:53:27.719304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:27.719799   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:27.719822   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:27.719765   25778 retry.go:31] will retry after 352.048049ms: waiting for machine to come up
	I0923 10:53:28.073266   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.073729   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.073755   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.073678   25778 retry.go:31] will retry after 446.737236ms: waiting for machine to come up
	I0923 10:53:28.522311   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.522758   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.522785   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.522723   25778 retry.go:31] will retry after 430.883485ms: waiting for machine to come up
	I0923 10:53:28.955161   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:28.955610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:28.955632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:28.955571   25778 retry.go:31] will retry after 596.158416ms: waiting for machine to come up
	I0923 10:53:29.553342   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:29.553790   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:29.553817   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:29.553738   25778 retry.go:31] will retry after 730.070516ms: waiting for machine to come up
	I0923 10:53:30.285659   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:30.286131   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:30.286157   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:30.286040   25778 retry.go:31] will retry after 880.584916ms: waiting for machine to come up
	I0923 10:53:31.168589   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:31.169030   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:31.169056   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:31.168976   25778 retry.go:31] will retry after 1.090798092s: waiting for machine to come up
	I0923 10:53:32.261334   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:32.261824   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:32.261851   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:32.261785   25778 retry.go:31] will retry after 1.772470281s: waiting for machine to come up
	I0923 10:53:34.036802   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:34.037280   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:34.037304   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:34.037244   25778 retry.go:31] will retry after 2.114432637s: waiting for machine to come up
	I0923 10:53:36.153777   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:36.154260   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:36.154287   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:36.154219   25778 retry.go:31] will retry after 2.408325817s: waiting for machine to come up
	I0923 10:53:38.564571   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:38.565093   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:38.565130   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:38.565046   25778 retry.go:31] will retry after 2.326260729s: waiting for machine to come up
	I0923 10:53:40.892782   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:40.893136   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:40.893165   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:40.893117   25778 retry.go:31] will retry after 4.498444105s: waiting for machine to come up
	I0923 10:53:45.396707   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:45.397269   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find current IP address of domain ha-790780-m03 in network mk-ha-790780
	I0923 10:53:45.397291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | I0923 10:53:45.397229   25778 retry.go:31] will retry after 3.781853522s: waiting for machine to come up
	I0923 10:53:49.183061   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183495   24995 main.go:141] libmachine: (ha-790780-m03) Found IP for machine: 192.168.39.128
	I0923 10:53:49.183516   24995 main.go:141] libmachine: (ha-790780-m03) Reserving static IP address...
	I0923 10:53:49.183525   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has current primary IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.183927   24995 main.go:141] libmachine: (ha-790780-m03) DBG | unable to find host DHCP lease matching {name: "ha-790780-m03", mac: "52:54:00:da:88:d2", ip: "192.168.39.128"} in network mk-ha-790780
	I0923 10:53:49.254082   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Getting to WaitForSSH function...
	I0923 10:53:49.254113   24995 main.go:141] libmachine: (ha-790780-m03) Reserved static IP address: 192.168.39.128
	I0923 10:53:49.254149   24995 main.go:141] libmachine: (ha-790780-m03) Waiting for SSH to be available...
	I0923 10:53:49.256671   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257072   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.257129   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.257268   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH client type: external
	I0923 10:53:49.257291   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa (-rw-------)
	I0923 10:53:49.257308   24995 main.go:141] libmachine: (ha-790780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 10:53:49.257317   24995 main.go:141] libmachine: (ha-790780-m03) DBG | About to run SSH command:
	I0923 10:53:49.257331   24995 main.go:141] libmachine: (ha-790780-m03) DBG | exit 0
	I0923 10:53:49.381472   24995 main.go:141] libmachine: (ha-790780-m03) DBG | SSH cmd err, output: <nil>: 
	I0923 10:53:49.381777   24995 main.go:141] libmachine: (ha-790780-m03) KVM machine creation complete!
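[editor's note] The "Waiting to get IP" stretch above polls the domain's DHCP lease and retries with a growing delay until an address appears. The stand-in below only mirrors that retry pattern; lookupIP is a stubbed placeholder for the driver's lease lookup, and the initial delay, growth factor, and jitter are illustrative, not the driver's actual backoff.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for the driver's DHCP-lease lookup; here it is stubbed
// to fail a few times before "finding" the address seen in the log.
func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.128", nil
}

func main() {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, in the spirit of the "will retry after ..." lines above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for an IP address")
}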
	I0923 10:53:49.382107   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:49.382695   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.382878   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:49.383011   24995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 10:53:49.383024   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetState
	I0923 10:53:49.384376   24995 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 10:53:49.384391   24995 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 10:53:49.384397   24995 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 10:53:49.384405   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.386759   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387147   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.387171   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.387306   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.387467   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387589   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.387701   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.387847   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.388073   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.388086   24995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 10:53:49.488864   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:53:49.488884   24995 main.go:141] libmachine: Detecting the provisioner...
	I0923 10:53:49.488892   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.491596   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.491978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.492008   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.492099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.492277   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492427   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.492526   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.492704   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.492876   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.492888   24995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 10:53:49.598720   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 10:53:49.598811   24995 main.go:141] libmachine: found compatible host: buildroot
	I0923 10:53:49.599353   24995 main.go:141] libmachine: Provisioning with buildroot...
	I0923 10:53:49.599372   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599616   24995 buildroot.go:166] provisioning hostname "ha-790780-m03"
	I0923 10:53:49.599639   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.599803   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.602122   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602493   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.602532   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.602649   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.602826   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.602949   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.603164   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.603352   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.603516   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.603528   24995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780-m03 && echo "ha-790780-m03" | sudo tee /etc/hostname
	I0923 10:53:49.721012   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780-m03
	
	I0923 10:53:49.721052   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.723652   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.723993   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.724019   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.724168   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:49.724322   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724468   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:49.724607   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:49.724760   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:49.724931   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:49.724946   24995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:53:49.840094   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
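[editor's note] The hostname provisioning above runs two commands over SSH: one to set the hostname and write /etc/hostname, and one to keep /etc/hosts in sync. A generic sketch of running such a command with golang.org/x/crypto/ssh follows; the key path, user, and address are taken from the log, but this is not minikube's ssh_runner (which mixes a native client and the external ssh binary, as the log shows).

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path shortened from the machine directory created above.
	key, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-790780-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.128:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command as the provisioning step logged above.
	out, err := sess.CombinedOutput(`sudo hostname ha-790780-m03 && echo "ha-790780-m03" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}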
	I0923 10:53:49.840118   24995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 10:53:49.840133   24995 buildroot.go:174] setting up certificates
	I0923 10:53:49.840143   24995 provision.go:84] configureAuth start
	I0923 10:53:49.840153   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetMachineName
	I0923 10:53:49.840425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:49.842798   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843203   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.843398   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.843425   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:49.846675   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.846978   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:49.847001   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:49.847165   24995 provision.go:143] copyHostCerts
	I0923 10:53:49.847199   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847229   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 10:53:49.847237   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 10:53:49.847304   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 10:53:49.847373   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847390   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 10:53:49.847395   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 10:53:49.847418   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 10:53:49.847462   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847478   24995 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 10:53:49.847484   24995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 10:53:49.847505   24995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 10:53:49.847551   24995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780-m03 san=[127.0.0.1 192.168.39.128 ha-790780-m03 localhost minikube]
	I0923 10:53:50.272155   24995 provision.go:177] copyRemoteCerts
	I0923 10:53:50.272213   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:53:50.272235   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.275051   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275585   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.275610   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.275867   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.276099   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.276265   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.276390   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.359884   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 10:53:50.359964   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 10:53:50.385147   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 10:53:50.385241   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:53:50.408651   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 10:53:50.408716   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:53:50.435874   24995 provision.go:87] duration metric: took 595.718111ms to configureAuth
	I0923 10:53:50.435900   24995 buildroot.go:189] setting minikube options for container-runtime
	I0923 10:53:50.436094   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:50.436172   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.438683   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439106   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.439127   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.439321   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.439488   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439634   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.439746   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.439894   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.440051   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.440064   24995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:53:50.684672   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:53:50.684697   24995 main.go:141] libmachine: Checking connection to Docker...
	I0923 10:53:50.684703   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetURL
	I0923 10:53:50.686020   24995 main.go:141] libmachine: (ha-790780-m03) DBG | Using libvirt version 6000000
	I0923 10:53:50.688488   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.688853   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.688879   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.689108   24995 main.go:141] libmachine: Docker is up and running!
	I0923 10:53:50.689121   24995 main.go:141] libmachine: Reticulating splines...
	I0923 10:53:50.689127   24995 client.go:171] duration metric: took 24.800318648s to LocalClient.Create
	I0923 10:53:50.689151   24995 start.go:167] duration metric: took 24.800381017s to libmachine.API.Create "ha-790780"
	I0923 10:53:50.689159   24995 start.go:293] postStartSetup for "ha-790780-m03" (driver="kvm2")
	I0923 10:53:50.689169   24995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:53:50.689184   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.689440   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:53:50.689461   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.691514   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.691815   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.691839   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.692003   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.692169   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.692285   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.692465   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.777980   24995 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:53:50.782722   24995 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 10:53:50.782745   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 10:53:50.782841   24995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 10:53:50.782921   24995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 10:53:50.782934   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 10:53:50.783049   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 10:53:50.794032   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:50.818235   24995 start.go:296] duration metric: took 129.060416ms for postStartSetup
	I0923 10:53:50.818300   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetConfigRaw
	I0923 10:53:50.818861   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.821701   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822078   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.822100   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.822411   24995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:53:50.822611   24995 start.go:128] duration metric: took 24.951969783s to createHost
	I0923 10:53:50.822632   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.824818   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825087   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.825104   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.825227   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.825431   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825587   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.825708   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.825886   24995 main.go:141] libmachine: Using SSH client type: native
	I0923 10:53:50.826038   24995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0923 10:53:50.826050   24995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 10:53:50.930070   24995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727088830.907721483
	
	I0923 10:53:50.930099   24995 fix.go:216] guest clock: 1727088830.907721483
	I0923 10:53:50.930110   24995 fix.go:229] Guest: 2024-09-23 10:53:50.907721483 +0000 UTC Remote: 2024-09-23 10:53:50.822622208 +0000 UTC m=+146.966414831 (delta=85.099275ms)
	I0923 10:53:50.930129   24995 fix.go:200] guest clock delta is within tolerance: 85.099275ms
	I0923 10:53:50.930136   24995 start.go:83] releasing machines lock for "ha-790780-m03", held for 25.059606586s
	I0923 10:53:50.930159   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.930413   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:50.933262   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.933632   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.933662   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.936077   24995 out.go:177] * Found network options:
	I0923 10:53:50.937456   24995 out.go:177]   - NO_PROXY=192.168.39.234,192.168.39.43
	W0923 10:53:50.938766   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.938786   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.938798   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939303   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939487   24995 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:53:50.939579   24995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:53:50.939619   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	W0923 10:53:50.939635   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 10:53:50.939651   24995 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 10:53:50.939713   24995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:53:50.939736   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:53:50.942522   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942765   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.942929   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.942950   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943114   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943237   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:50.943278   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:50.943281   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943465   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:53:50.943491   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.943650   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:53:50.943653   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:50.944011   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:53:50.944170   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:53:51.179564   24995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 10:53:51.186418   24995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:53:51.186493   24995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:53:51.205433   24995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:53:51.205455   24995 start.go:495] detecting cgroup driver to use...
	I0923 10:53:51.205519   24995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:53:51.225654   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:53:51.240061   24995 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:53:51.240122   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:53:51.255040   24995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:53:51.270087   24995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:53:51.386340   24995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:53:51.551856   24995 docker.go:233] disabling docker service ...
	I0923 10:53:51.551936   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:53:51.566431   24995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:53:51.579646   24995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:53:51.704084   24995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:53:51.818925   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:53:51.833174   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:53:51.851230   24995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:53:51.851304   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.862780   24995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:53:51.862838   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.874053   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.884749   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.895370   24995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:53:51.906992   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.919902   24995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.938806   24995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:53:51.950285   24995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:53:51.960703   24995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:53:51.960774   24995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:53:51.975701   24995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:53:51.986268   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:52.107292   24995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:53:52.198777   24995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:53:52.198848   24995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:53:52.204135   24995 start.go:563] Will wait 60s for crictl version
	I0923 10:53:52.204184   24995 ssh_runner.go:195] Run: which crictl
	I0923 10:53:52.208403   24995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:53:52.251505   24995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 10:53:52.251599   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.282350   24995 ssh_runner.go:195] Run: crio --version
	I0923 10:53:52.311799   24995 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 10:53:52.313353   24995 out.go:177]   - env NO_PROXY=192.168.39.234
	I0923 10:53:52.314907   24995 out.go:177]   - env NO_PROXY=192.168.39.234,192.168.39.43
	I0923 10:53:52.316435   24995 main.go:141] libmachine: (ha-790780-m03) Calling .GetIP
	I0923 10:53:52.319158   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319626   24995 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:53:52.319654   24995 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:53:52.319874   24995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 10:53:52.324605   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:52.339255   24995 mustload.go:65] Loading cluster: ha-790780
	I0923 10:53:52.339529   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:53:52.339777   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.339813   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.354195   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0923 10:53:52.354688   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.355182   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.355203   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.355538   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.355708   24995 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 10:53:52.357205   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:52.357505   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:52.357542   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:52.372762   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0923 10:53:52.373235   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:52.373697   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:52.373716   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:52.374015   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:52.374212   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:52.374340   24995 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.128
	I0923 10:53:52.374351   24995 certs.go:194] generating shared ca certs ...
	I0923 10:53:52.374369   24995 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.374504   24995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 10:53:52.374556   24995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 10:53:52.374570   24995 certs.go:256] generating profile certs ...
	I0923 10:53:52.374655   24995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 10:53:52.374693   24995 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6
	I0923 10:53:52.374713   24995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.128 192.168.39.254]
	I0923 10:53:52.830596   24995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 ...
	I0923 10:53:52.830630   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6: {Name:mk3da13c3de64b9df293631e361b2c7f1e18faef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830809   24995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 ...
	I0923 10:53:52.830824   24995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6: {Name:mk9b5e211aee3a00b4a3121b2b594883d08d2d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:53:52.830919   24995 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 10:53:52.831074   24995 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.862480c6 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 10:53:52.831254   24995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 10:53:52.831273   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:53:52.831292   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:53:52.831307   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:53:52.831326   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:53:52.831343   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:53:52.831361   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:53:52.831377   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:53:52.845466   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:53:52.845553   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 10:53:52.845615   24995 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 10:53:52.845628   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:53:52.845681   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:53:52.845720   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:53:52.845752   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 10:53:52.845808   24995 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 10:53:52.845849   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 10:53:52.845870   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:52.845888   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 10:53:52.845975   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:52.849292   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849803   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:52.849833   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:52.849989   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:52.850212   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:52.850363   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:52.850493   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:52.925695   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 10:53:52.931543   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 10:53:52.942513   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 10:53:52.947104   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 10:53:52.958388   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 10:53:52.963161   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 10:53:52.974344   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 10:53:52.978586   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 10:53:52.989199   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 10:53:52.993359   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 10:53:53.004532   24995 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 10:53:53.009112   24995 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0923 10:53:53.022998   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:53:53.048580   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:53:53.074022   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:53:53.099377   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:53:53.125775   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 10:53:53.149277   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 10:53:53.173416   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:53:53.196002   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:53:53.219585   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 10:53:53.244005   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:53:53.269483   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 10:53:53.294869   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 10:53:53.313037   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 10:53:53.331540   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 10:53:53.349167   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 10:53:53.365721   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 10:53:53.382590   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0923 10:53:53.399048   24995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 10:53:53.415691   24995 ssh_runner.go:195] Run: openssl version
	I0923 10:53:53.421883   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:53:53.432913   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437536   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.437594   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:53:53.443568   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:53:53.454559   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 10:53:53.466110   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.471977   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.472046   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 10:53:53.478758   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 10:53:53.490184   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 10:53:53.500924   24995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505855   24995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.505903   24995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 10:53:53.511671   24995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:53:53.523484   24995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:53:53.527585   24995 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:53:53.527642   24995 kubeadm.go:934] updating node {m03 192.168.39.128 8443 v1.31.1 crio true true} ...
	I0923 10:53:53.527721   24995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:53:53.527745   24995 kube-vip.go:115] generating kube-vip config ...
	I0923 10:53:53.527775   24995 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 10:53:53.547465   24995 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 10:53:53.547540   24995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 10:53:53.547608   24995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.560380   24995 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:53:53.560453   24995 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:53:53.573111   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:53:53.573138   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:53:53.573159   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573166   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:53:53.573188   24995 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:53:53.573217   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.573226   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:53:53.573267   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:53:53.590633   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 10:53:53.590666   24995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.590676   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:53:53.590699   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 10:53:53.590727   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:53:53.590760   24995 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:53:53.604722   24995 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 10:53:53.604761   24995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:53:54.451748   24995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 10:53:54.462513   24995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 10:53:54.481654   24995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:53:54.498291   24995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 10:53:54.514964   24995 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 10:53:54.519190   24995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:53:54.531635   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:53:54.654563   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:53:54.675941   24995 host.go:66] Checking if "ha-790780" exists ...
	I0923 10:53:54.676279   24995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:53:54.676323   24995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:53:54.693004   24995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0923 10:53:54.693496   24995 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:53:54.693939   24995 main.go:141] libmachine: Using API Version  1
	I0923 10:53:54.693961   24995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:53:54.694293   24995 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:53:54.694479   24995 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 10:53:54.694626   24995 start.go:317] joinCluster: &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:53:54.694743   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 10:53:54.694765   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 10:53:54.697460   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.697884   24995 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 10:53:54.697912   24995 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 10:53:54.698049   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 10:53:54.698201   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 10:53:54.698349   24995 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 10:53:54.698455   24995 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 10:53:54.854997   24995 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:53:54.855050   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
	I0923 10:54:17.634590   24995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hoy5xs.p8rtt9vlcudv8w5v --discovery-token-ca-cert-hash sha256:e1d2f4f0043ec8c058f8c6dc5130afe31b321e881436326928809de25c1fdff3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-790780-m03 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443": (22.77951683s)
	I0923 10:54:17.634630   24995 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 10:54:18.244633   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-790780-m03 minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=ha-790780 minikube.k8s.io/primary=false
	I0923 10:54:18.356200   24995 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-790780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 10:54:18.464003   24995 start.go:319] duration metric: took 23.769370572s to joinCluster
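Note: the join above is the standard two-step flow: minikube first asks the existing control plane for a fresh join command with "kubeadm token create --print-join-command --ttl=0", then runs that command on the new machine with the extra control-plane flags shown in the log, and finally labels the node and removes the control-plane NoSchedule taint. The Go sketch below mirrors only the two kubeadm steps; it assumes local execution with passwordless sudo rather than minikube's SSH runner, and is illustrative, not minikube's code.

	// join_sketch.go -- illustrative two-step kubeadm join, mirroring the log above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Step 1: ask the existing control plane for a join command (assumes sudo and kubeadm on PATH).
		out, err := exec.Command("sudo", "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))
	
		// Step 2: run the printed command on the joining node, adding the control-plane flags
		// seen in the log (--control-plane, --apiserver-advertise-address, --apiserver-bind-port).
		full := joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
		fmt.Println("would run:", full)
		// exec.Command("sudo", "bash", "-c", full).Run() // left commented: illustration only
	}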
	I0923 10:54:18.464065   24995 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:54:18.464405   24995 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:54:18.465913   24995 out.go:177] * Verifying Kubernetes components...
	I0923 10:54:18.467412   24995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:54:18.756406   24995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:54:18.802392   24995 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:54:18.802611   24995 kapi.go:59] client config for ha-790780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 10:54:18.802663   24995 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.234:8443
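Note: kapi.go builds the client from the test's kubeconfig and then, because the HA virtual IP at 192.168.39.254 can point at a stale endpoint right after a control-plane change, overrides the host with a concrete control-plane address, as the warning above shows. A minimal client-go sketch of that pattern follows; the kubeconfig path is hypothetical and the override value is taken from the log line above. This is illustrative, not minikube's kapi helper.

	// client_sketch.go -- build a clientset from a kubeconfig, then override a stale host.
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		// Point the client at a concrete control-plane endpoint instead of the HA VIP,
		// mirroring the "Overriding stale ClientConfig host" warning above.
		cfg.Host = "https://192.168.39.234:8443"
	
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = clientset // would be used for the readiness polls that follow in the log
	}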
	I0923 10:54:18.802852   24995 node_ready.go:35] waiting up to 6m0s for node "ha-790780-m03" to be "Ready" ...
	I0923 10:54:18.802919   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:18.802926   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:18.802933   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:18.802938   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:18.806473   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:19.303251   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.303278   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.303289   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.303297   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.306929   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:19.803053   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:19.803079   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:19.803087   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:19.803099   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:19.806552   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.303861   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.303887   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.303897   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.303903   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.307405   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:20.803113   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:20.803146   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:20.803154   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:20.803159   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:20.806146   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:20.806645   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:21.303931   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.303977   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.303989   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.303995   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.308047   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:21.803958   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:21.803978   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:21.803985   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:21.803991   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:21.807634   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.303112   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.303136   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.303146   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.303152   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:22.803868   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:22.803900   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:22.803912   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:22.803918   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:22.809179   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:22.809796   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:23.303023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.303042   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.303050   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.303054   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.306668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:23.803788   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:23.803812   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:23.803824   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:23.803830   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:23.807293   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.303271   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.303300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.303312   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.303319   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.306672   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:24.804050   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:24.804069   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:24.804078   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:24.804081   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:24.807683   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:25.303840   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.303859   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.303867   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.303871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.306860   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:25.307495   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:25.803972   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:25.804004   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:25.804015   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:25.804020   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:25.809010   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:26.303324   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.303361   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.303373   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.303381   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.307038   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:26.803707   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:26.803726   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:26.803735   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:26.803740   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:26.807424   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.303612   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.303633   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.303641   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.303644   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.307111   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:27.307894   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:27.803014   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:27.803035   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:27.803042   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:27.803047   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:27.806595   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.303068   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.303091   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.303099   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.303103   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.306712   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:28.803340   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:28.803367   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:28.803378   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:28.803383   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:28.808838   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:29.303295   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.303316   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.303329   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.303334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.306632   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.803768   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:29.803791   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:29.803799   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:29.803805   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:29.807177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:29.807790   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:30.303713   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.303735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.303747   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.303752   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.307209   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:30.803111   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:30.803133   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:30.803141   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:30.803149   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:30.806613   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.303325   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.303352   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.303371   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.303378   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.307177   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:31.803015   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:31.803038   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:31.803048   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:31.803056   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:31.806715   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.304018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.304043   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.304053   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.304060   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.307932   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:32.308669   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:32.803891   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:32.803917   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:32.803926   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:32.803930   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:32.807307   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.303944   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.303964   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.303971   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.303975   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.307665   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:33.803624   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:33.803651   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:33.803662   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:33.803667   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:33.807257   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.303218   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.303254   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.303260   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.306866   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.803306   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:34.803327   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:34.803334   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:34.803339   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:34.807098   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:34.807707   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:35.303220   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.303244   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.303255   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.303261   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.306357   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:35.803279   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:35.803300   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:35.803308   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:35.803311   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:35.806322   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:36.303406   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.303426   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.303434   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.303437   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.307051   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.804001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:36.804025   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:36.804032   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:36.804037   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:36.807873   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:36.808340   24995 node_ready.go:53] node "ha-790780-m03" has status "Ready":"False"
	I0923 10:54:37.304023   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.304056   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.304068   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.304074   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.307139   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.803018   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:37.803040   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.803049   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.803053   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.806605   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:37.807211   24995 node_ready.go:49] node "ha-790780-m03" has status "Ready":"True"
	I0923 10:54:37.807228   24995 node_ready.go:38] duration metric: took 19.004361031s for node "ha-790780-m03" to be "Ready" ...
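Note: the block of repeated "GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03" calls above is a simple poll, roughly every 500ms, until the node reports its Ready condition as True; here that took about 19s. Below is a sketch of an equivalent poll with client-go, assuming a recent apimachinery (the PollUntilContextTimeout helper, v0.27+); it is not minikube's node_ready implementation.

	// node_ready_sketch.go -- illustrative poll for a node's Ready condition.
	package sketch
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// WaitNodeReady polls every 500ms, up to 6 minutes, until the node's Ready condition is True.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as "not ready yet" and keep polling
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}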
	I0923 10:54:37.807235   24995 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:54:37.807290   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:37.807299   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.807306   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.807314   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.813087   24995 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:54:37.819930   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.820001   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsbth
	I0923 10:54:37.820010   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.820017   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.820021   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.822941   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.823534   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.823553   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.823564   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.823569   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.826001   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.826517   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.826537   24995 pod_ready.go:82] duration metric: took 6.583104ms for pod "coredns-7c65d6cfc9-bsbth" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826548   24995 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.826607   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-vzhrs
	I0923 10:54:37.826617   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.826627   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.826638   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.829279   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.829843   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.829861   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.829871   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.829876   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.832424   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.832919   24995 pod_ready.go:93] pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.832933   24995 pod_ready.go:82] duration metric: took 6.374276ms for pod "coredns-7c65d6cfc9-vzhrs" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832941   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.832999   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780
	I0923 10:54:37.833006   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.833012   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.833019   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.835776   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.836388   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:37.836406   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.836415   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.836421   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.838742   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.839384   24995 pod_ready.go:93] pod "etcd-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.839400   24995 pod_ready.go:82] duration metric: took 6.450727ms for pod "etcd-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839411   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.839464   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m02
	I0923 10:54:37.839474   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.839484   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.839492   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.841917   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.842434   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:37.842448   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:37.842457   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:37.842463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:37.844487   24995 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 10:54:37.844973   24995 pod_ready.go:93] pod "etcd-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:37.844988   24995 pod_ready.go:82] duration metric: took 5.569102ms for pod "etcd-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:37.844998   24995 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.003469   24995 request.go:632] Waited for 158.377606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003538   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/etcd-ha-790780-m03
	I0923 10:54:38.003546   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.003556   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.003563   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.007272   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.203213   24995 request.go:632] Waited for 195.30349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203263   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:38.203268   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.203276   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.203283   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.206660   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.207358   24995 pod_ready.go:93] pod "etcd-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.207377   24995 pod_ready.go:82] duration metric: took 362.371698ms for pod "etcd-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
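Note: the "Waited ... due to client-side throttling" lines come from client-go's built-in client-side rate limiter, which paces requests at roughly QPS 5 / Burst 10 when the rest.Config leaves those fields at zero, as the config dump earlier in this log shows (QPS:0, Burst:0). The sketch below only shows where those knobs live; the values and the kubeconfig path are illustrative, not what minikube configures.

	// throttle_sketch.go -- raising QPS/Burst removes the ~200ms client-side waits seen here.
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		// Illustrative values; zero means "use the default limiter" (about 5 QPS, burst 10).
		cfg.QPS = 50
		cfg.Burst = 100
	
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}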
	I0923 10:54:38.207393   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.403519   24995 request.go:632] Waited for 196.060085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403591   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780
	I0923 10:54:38.403596   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.403604   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.403609   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.407248   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.603071   24995 request.go:632] Waited for 195.28673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603162   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:38.603171   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.603185   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.603191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.606368   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:38.606871   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:38.606889   24995 pod_ready.go:82] duration metric: took 399.489169ms for pod "kube-apiserver-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.606901   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:38.803863   24995 request.go:632] Waited for 196.897276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803951   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m02
	I0923 10:54:38.803957   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:38.803965   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:38.803970   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:38.807324   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.003391   24995 request.go:632] Waited for 195.083674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003447   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:39.003452   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.003459   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.003463   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.007170   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.007621   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.007637   24995 pod_ready.go:82] duration metric: took 400.728218ms for pod "kube-apiserver-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.007646   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.203104   24995 request.go:632] Waited for 195.376867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203174   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-790780-m03
	I0923 10:54:39.203180   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.203191   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.203199   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.207195   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.403428   24995 request.go:632] Waited for 195.367448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403481   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:39.403497   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.403514   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.403518   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.407467   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.408031   24995 pod_ready.go:93] pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.408055   24995 pod_ready.go:82] duration metric: took 400.401034ms for pod "kube-apiserver-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.408068   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.604073   24995 request.go:632] Waited for 195.932476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604147   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780
	I0923 10:54:39.604155   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.604162   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.604171   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.607668   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.803638   24995 request.go:632] Waited for 195.213228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803724   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:39.803735   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:39.803743   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:39.803746   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:39.807615   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:39.808349   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:39.808366   24995 pod_ready.go:82] duration metric: took 400.287089ms for pod "kube-controller-manager-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:39.808375   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.003824   24995 request.go:632] Waited for 195.387565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003877   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m02
	I0923 10:54:40.003882   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.003889   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.003899   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.007398   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.203651   24995 request.go:632] Waited for 195.36679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203720   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:40.203725   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.203732   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.203735   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.207328   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.208124   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.208142   24995 pod_ready.go:82] duration metric: took 399.761139ms for pod "kube-controller-manager-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.208155   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.403086   24995 request.go:632] Waited for 194.869554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403150   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-790780-m03
	I0923 10:54:40.403167   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.403177   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.403187   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.407112   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.603302   24995 request.go:632] Waited for 195.339611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603351   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:40.603356   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.603364   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.603368   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.606880   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:40.607541   24995 pod_ready.go:93] pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:40.607563   24995 pod_ready.go:82] duration metric: took 399.39886ms for pod "kube-controller-manager-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.607574   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:40.803473   24995 request.go:632] Waited for 195.828576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803528   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jqwtw
	I0923 10:54:40.803533   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:40.803540   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:40.803544   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:40.807602   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:41.003253   24995 request.go:632] Waited for 194.249655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003339   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:41.003350   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.003359   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.003365   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.006586   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.007310   24995 pod_ready.go:93] pod "kube-proxy-jqwtw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.007329   24995 pod_ready.go:82] duration metric: took 399.74892ms for pod "kube-proxy-jqwtw" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.007339   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.203496   24995 request.go:632] Waited for 196.092833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203562   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rqjzc
	I0923 10:54:41.203567   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.203575   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.203578   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.207204   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.403851   24995 request.go:632] Waited for 195.767978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403907   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:41.403914   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.403924   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.403934   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.407303   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.407822   24995 pod_ready.go:93] pod "kube-proxy-rqjzc" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.407837   24995 pod_ready.go:82] duration metric: took 400.492538ms for pod "kube-proxy-rqjzc" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.407846   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.604077   24995 request.go:632] Waited for 196.149981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x8fb6
	I0923 10:54:41.604148   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.604169   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.604174   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.607470   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.803470   24995 request.go:632] Waited for 195.363139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803568   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:41.803577   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:41.803599   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:41.803607   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:41.806928   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:41.807802   24995 pod_ready.go:93] pod "kube-proxy-x8fb6" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:41.807821   24995 pod_ready.go:82] duration metric: took 399.96783ms for pod "kube-proxy-x8fb6" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:41.807833   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.004033   24995 request.go:632] Waited for 196.111135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004102   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780
	I0923 10:54:42.004132   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.004143   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.004163   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.007471   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.203462   24995 request.go:632] Waited for 195.3653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203523   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780
	I0923 10:54:42.203530   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.203539   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.203542   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.207322   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.207956   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.207977   24995 pod_ready.go:82] duration metric: took 400.13764ms for pod "kube-scheduler-ha-790780" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.207986   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.403868   24995 request.go:632] Waited for 195.812102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403956   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m02
	I0923 10:54:42.403968   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.403980   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.403990   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.407964   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:42.603132   24995 request.go:632] Waited for 194.291839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603204   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m02
	I0923 10:54:42.603209   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.603219   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.603225   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.607412   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:42.607957   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:42.607976   24995 pod_ready.go:82] duration metric: took 399.981007ms for pod "kube-scheduler-ha-790780-m02" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.607988   24995 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:42.804082   24995 request.go:632] Waited for 196.014482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804138   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-790780-m03
	I0923 10:54:42.804143   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:42.804150   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:42.804155   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:42.807740   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.003755   24995 request.go:632] Waited for 195.347939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003855   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes/ha-790780-m03
	I0923 10:54:43.003875   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.003887   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.003896   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.007973   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.009036   24995 pod_ready.go:93] pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 10:54:43.009058   24995 pod_ready.go:82] duration metric: took 401.061758ms for pod "kube-scheduler-ha-790780-m03" in "kube-system" namespace to be "Ready" ...
	I0923 10:54:43.009074   24995 pod_ready.go:39] duration metric: took 5.201827787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
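Note: each pod_ready wait above follows the same pattern: GET the pod, check its PodReady condition, then GET the node it runs on; the throttled requests account for the ~200ms gaps between calls. A minimal client-go sketch of the readiness check is below; it is illustrative, not minikube's pod_ready code.

	// pod_ready_sketch.go -- illustrative check of a pod's Ready condition.
	package sketch
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// PodIsReady returns true when the pod's PodReady condition is True.
	func PodIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}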
	I0923 10:54:43.009091   24995 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:54:43.009170   24995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:54:43.027664   24995 api_server.go:72] duration metric: took 24.563557521s to wait for apiserver process to appear ...
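Note: the "apiserver process" wait simply runs pgrep on the node until a kube-apiserver command line matching the pattern appears. A small local-execution sketch is below; the pattern is copied from the log line above, and sudo/pgrep availability is assumed.

	// pgrep_sketch.go -- illustrative version of the apiserver process check above:
	// pgrep -xnf returns the newest PID whose full command line matches the pattern.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("no matching process yet") // pgrep exits non-zero when nothing matches
			return
		}
		fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
	}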
	I0923 10:54:43.027697   24995 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:54:43.027721   24995 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0923 10:54:43.032140   24995 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0923 10:54:43.032214   24995 round_trippers.go:463] GET https://192.168.39.234:8443/version
	I0923 10:54:43.032220   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.032231   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.032238   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.033668   24995 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 10:54:43.033783   24995 api_server.go:141] control plane version: v1.31.1
	I0923 10:54:43.033805   24995 api_server.go:131] duration metric: took 6.10028ms to wait for apiserver health ...
	I0923 10:54:43.033815   24995 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:54:43.204056   24995 request.go:632] Waited for 170.168573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204125   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.204130   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.204140   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.204147   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.210512   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.216975   24995 system_pods.go:59] 24 kube-system pods found
	I0923 10:54:43.217008   24995 system_pods.go:61] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.217015   24995 system_pods.go:61] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.217020   24995 system_pods.go:61] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.217025   24995 system_pods.go:61] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.217030   24995 system_pods.go:61] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.217035   24995 system_pods.go:61] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.217039   24995 system_pods.go:61] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.217046   24995 system_pods.go:61] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.217052   24995 system_pods.go:61] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.217060   24995 system_pods.go:61] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.217065   24995 system_pods.go:61] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.217073   24995 system_pods.go:61] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.217078   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.217086   24995 system_pods.go:61] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.217094   24995 system_pods.go:61] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.217099   24995 system_pods.go:61] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.217104   24995 system_pods.go:61] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.217109   24995 system_pods.go:61] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.217113   24995 system_pods.go:61] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.217118   24995 system_pods.go:61] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.217124   24995 system_pods.go:61] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.217129   24995 system_pods.go:61] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.217137   24995 system_pods.go:61] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.217141   24995 system_pods.go:61] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.217150   24995 system_pods.go:74] duration metric: took 183.325652ms to wait for pod list to return data ...
	I0923 10:54:43.217162   24995 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:54:43.403603   24995 request.go:632] Waited for 186.357604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403650   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/default/serviceaccounts
	I0923 10:54:43.403671   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.403685   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.403692   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.408142   24995 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:54:43.408270   24995 default_sa.go:45] found service account: "default"
	I0923 10:54:43.408289   24995 default_sa.go:55] duration metric: took 191.114244ms for default service account to be created ...
	I0923 10:54:43.408302   24995 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:54:43.603624   24995 request.go:632] Waited for 195.240427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603680   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/namespaces/kube-system/pods
	I0923 10:54:43.603685   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.603692   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.603698   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.609933   24995 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:54:43.617043   24995 system_pods.go:86] 24 kube-system pods found
	I0923 10:54:43.617075   24995 system_pods.go:89] "coredns-7c65d6cfc9-bsbth" [5d308ec2-ea22-47f7-966c-9b0a4410c764] Running
	I0923 10:54:43.617081   24995 system_pods.go:89] "coredns-7c65d6cfc9-vzhrs" [730f9509-94d1-4b3f-b45e-bee6f2386d31] Running
	I0923 10:54:43.617085   24995 system_pods.go:89] "etcd-ha-790780" [4f987034-7c9c-42fe-8429-f02cb75aa481] Running
	I0923 10:54:43.617089   24995 system_pods.go:89] "etcd-ha-790780-m02" [1bced08f-2782-4be6-b003-5dbfe0fb17e2] Running
	I0923 10:54:43.617094   24995 system_pods.go:89] "etcd-ha-790780-m03" [a8ba763b-e2c8-476f-b55d-3801a6ebfddc] Running
	I0923 10:54:43.617098   24995 system_pods.go:89] "kindnet-5d9ww" [8d6249eb-6de3-413a-8acf-3804fd05badb] Running
	I0923 10:54:43.617101   24995 system_pods.go:89] "kindnet-lzbx6" [8323d5a3-9987-4d80-a510-9a5631283d3b] Running
	I0923 10:54:43.617105   24995 system_pods.go:89] "kindnet-x2v9d" [f3c3c925-26bd-45e0-a675-cb4a5e1fe870] Running
	I0923 10:54:43.617108   24995 system_pods.go:89] "kube-apiserver-ha-790780" [a7b8625f-5a49-4659-b0a3-2f94970e108d] Running
	I0923 10:54:43.617111   24995 system_pods.go:89] "kube-apiserver-ha-790780-m02" [a182522d-43cf-4095-9877-7077544a5bc8] Running
	I0923 10:54:43.617115   24995 system_pods.go:89] "kube-apiserver-ha-790780-m03" [3d5a7d3c-744c-4ada-90f3-6273d634bb4b] Running
	I0923 10:54:43.617118   24995 system_pods.go:89] "kube-controller-manager-ha-790780" [1649598f-f71e-4949-9ba5-53eb97b565dd] Running
	I0923 10:54:43.617123   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m02" [5c96ae18-af30-4bbf-a49f-785bdd5ce57d] Running
	I0923 10:54:43.617126   24995 system_pods.go:89] "kube-controller-manager-ha-790780-m03" [b317c61a-e51d-4a01-8591-7d447395bcb5] Running
	I0923 10:54:43.617129   24995 system_pods.go:89] "kube-proxy-jqwtw" [e60edcb9-c4a2-4116-b316-cc7777aa054f] Running
	I0923 10:54:43.617132   24995 system_pods.go:89] "kube-proxy-rqjzc" [ea0b4964-a74f-43f0-aebf-533661bc9537] Running
	I0923 10:54:43.617136   24995 system_pods.go:89] "kube-proxy-x8fb6" [75d22f16-cec1-433f-9f63-210a77c7bf02] Running
	I0923 10:54:43.617139   24995 system_pods.go:89] "kube-scheduler-ha-790780" [b21b7149-36c5-4769-9523-4eb98cbe16b6] Running
	I0923 10:54:43.617142   24995 system_pods.go:89] "kube-scheduler-ha-790780-m02" [ec3b5c3c-956f-4d56-a7c0-80aa8e2f2c2d] Running
	I0923 10:54:43.617145   24995 system_pods.go:89] "kube-scheduler-ha-790780-m03" [1c21e524-7e5a-4c74-97e6-04dd8d61ecbb] Running
	I0923 10:54:43.617148   24995 system_pods.go:89] "kube-vip-ha-790780" [428b03cd-bd5f-4781-a9b1-d07dd1a2a7fd] Running
	I0923 10:54:43.617151   24995 system_pods.go:89] "kube-vip-ha-790780-m02" [6f3fc351-b90d-4b9c-b2a5-b1197d9867a0] Running
	I0923 10:54:43.617154   24995 system_pods.go:89] "kube-vip-ha-790780-m03" [4336e409-5c78-4af0-8575-fe659435909a] Running
	I0923 10:54:43.617157   24995 system_pods.go:89] "storage-provisioner" [fd672c2c-1784-44f0-adc7-e5184ddc96f9] Running
	I0923 10:54:43.617163   24995 system_pods.go:126] duration metric: took 208.855184ms to wait for k8s-apps to be running ...
	I0923 10:54:43.617173   24995 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:54:43.617217   24995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:54:43.635389   24995 system_svc.go:56] duration metric: took 18.194216ms WaitForService to wait for kubelet
	I0923 10:54:43.635423   24995 kubeadm.go:582] duration metric: took 25.171320686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:54:43.635447   24995 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:54:43.803841   24995 request.go:632] Waited for 168.315518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803908   24995 round_trippers.go:463] GET https://192.168.39.234:8443/api/v1/nodes
	I0923 10:54:43.803913   24995 round_trippers.go:469] Request Headers:
	I0923 10:54:43.803920   24995 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:54:43.803924   24995 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 10:54:43.807502   24995 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:54:43.808531   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808553   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808564   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808567   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808571   24995 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 10:54:43.808574   24995 node_conditions.go:123] node cpu capacity is 2
	I0923 10:54:43.808579   24995 node_conditions.go:105] duration metric: took 173.125439ms to run NodePressure ...
	I0923 10:54:43.808592   24995 start.go:241] waiting for startup goroutines ...
	I0923 10:54:43.808611   24995 start.go:255] writing updated cluster config ...
	I0923 10:54:43.808882   24995 ssh_runner.go:195] Run: rm -f paused
	I0923 10:54:43.860687   24995 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:54:43.862725   24995 out.go:177] * Done! kubectl is now configured to use "ha-790780" cluster and "default" namespace by default
	
	
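	The start log above ends with minikube's readiness checks against the ha-790780 cluster: per-pod "Ready" waits, the apiserver /healthz probe at https://192.168.39.234:8443, the default service-account lookup, and the NodePressure capacity checks, followed by the CRI-O runtime log excerpt below. As a minimal sketch (not part of the captured output), the same checks could be reproduced by hand roughly as follows; the address and context name are taken from the log, and anonymous access to /healthz is an assumption that holds only for a default kubeadm-style RBAC setup:
	
	    # Apiserver health probe, mirroring api_server.go:253 above
	    # (-k skips TLS verification; assumes /healthz is reachable without client certs)
	    curl -k https://192.168.39.234:8443/healthz
	
	    # Readiness of the system-critical pods waited on by pod_ready.go above
	    kubectl --context ha-790780 -n kube-system get pods -o wide
	
	    # Node capacity values checked by node_conditions.go (cpu, ephemeral-storage)
	    kubectl --context ha-790780 get nodes \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'
	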
	==> CRI-O <==
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.602864655Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hmsb2,Uid:8e067811-dad7-4eae-8f9f-24b6d134c3be,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088886024038976,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:54:44.813863461Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fd672c2c-1784-44f0-adc7-e5184ddc96f9,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727088740544614540,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T10:52:20.229007087Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzhrs,Uid:730f9509-94d1-4b3f-b45e-bee6f2386d31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088740539260909,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:20.226442275Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bsbth,Uid:5d308ec2-ea22-47f7-966c-9b0a4410c764,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727088740537164598,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:20.219468289Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&PodSandboxMetadata{Name:kube-proxy-jqwtw,Uid:e60edcb9-c4a2-4116-b316-cc7777aa054f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088728882963449,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-23T10:52:07.073572528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&PodSandboxMetadata{Name:kindnet-5d9ww,Uid:8d6249eb-6de3-413a-8acf-3804fd05badb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088727976065997,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:07.068777040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-790780,Uid:f67c31e4930aaac3c497cb111135e696,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1727088715999691727,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{kubernetes.io/config.hash: f67c31e4930aaac3c497cb111135e696,kubernetes.io/config.seen: 2024-09-23T10:51:55.497632478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-790780,Uid:255812681d1a0e612e49bf2f9931ab5b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088715998761682,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: 255812681d1a0e612e49bf2f9931ab5b,kubernetes.io/config.seen: 2024-09-23T10:51:55.497630432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-790780,Uid:61ebdcec6eabb6584f7929ac2d99660f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088715983468229,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 61ebdcec6eabb6584f7929ac2d99660f,kubernetes.io/config.seen: 2024-09-23T10:51:55.497631438Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&PodSandboxMetadata{Name:etcd-ha-790780,Uid:15d010bb
b48c46b1437d3cf7cda623bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088715970997879,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.234:2379,kubernetes.io/config.hash: 15d010bbb48c46b1437d3cf7cda623bc,kubernetes.io/config.seen: 2024-09-23T10:51:55.497625714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-790780,Uid:292a50d5f74643d055dd7bcfbab1dbaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727088715970103683,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.234:8443,kubernetes.io/config.hash: 292a50d5f74643d055dd7bcfbab1dbaf,kubernetes.io/config.seen: 2024-09-23T10:51:55.497629266Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=95eab798-64d8-4fc8-9134-3b1a32ae8161 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.603641788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d38a4470-e7cc-4961-b1b1-c4403756611f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.603703772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d38a4470-e7cc-4961-b1b1-c4403756611f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.604016240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d38a4470-e7cc-4961-b1b1-c4403756611f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.609994074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07116a1c-90bf-424d-b807-944fec4600a1 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.610050755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07116a1c-90bf-424d-b807-944fec4600a1 name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.611189753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fbd67ab-be40-4b93-b26e-e8b33387998a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.611748889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089118611720560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fbd67ab-be40-4b93-b26e-e8b33387998a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.612459881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4529541-80ce-425c-b0b4-b66cc73850aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.612511309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4529541-80ce-425c-b0b4-b66cc73850aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.612735126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4529541-80ce-425c-b0b4-b66cc73850aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.648603605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a62dae5d-061b-4a25-9cec-ed87e963b46c name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.648705597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a62dae5d-061b-4a25-9cec-ed87e963b46c name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.649696782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b921354-3cbb-41e4-807a-dde087a3d2b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.650112410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089118650089923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b921354-3cbb-41e4-807a-dde087a3d2b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.650643969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8142eb8e-82af-46c1-8478-acffbb998714 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.650703551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8142eb8e-82af-46c1-8478-acffbb998714 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.650966227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8142eb8e-82af-46c1-8478-acffbb998714 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.692168933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec43c27a-aabc-4996-b99f-9ce801368cbe name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.692266117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec43c27a-aabc-4996-b99f-9ce801368cbe name=/runtime.v1.RuntimeService/Version
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.693498897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=515e6c45-d880-4582-bc21-792758d5d74d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.694020452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089118693996663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=515e6c45-d880-4582-bc21-792758d5d74d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.694621974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db9ed9bc-10e9-421d-8a6d-d3b846d262a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.694708356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db9ed9bc-10e9-421d-8a6d-d3b846d262a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 10:58:38 ha-790780 crio[667]: time="2024-09-23 10:58:38.695021691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727088889397776055,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504391361e9f40aabda1ccac9cc1ce267e46c9513c909cd87b671db16b213a48,PodSandboxId:e1bfaf78434891d2f951ff6600532dd9c245482186e0021bc2495911f607d184,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727088740810057450,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740832931018,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727088740768165410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea
22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172708872
8991869999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727088728409241952,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d7d0f860c2c3ec0f495cce0d7c1bb4fe78f9cd8204a47d28954f8af090cb29,PodSandboxId:2b178d8dcf3adad8e0d65cb746cceccf9a6f6982118ed2400831f5f707a5e336,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727088719314298916,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c31e4930aaac3c497cb111135e696,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727088716268304289,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d,PodSandboxId:d65f8d57327b033ebee51fea52480dd4b45441f10891f709bdcc6417fddd63eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727088716264830646,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d,PodSandboxId:9e910662aa47013f6130cfda39eb9520d52b7fe7ec90f0927bb8f0041bf7d783,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727088716180501386,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727088716120929006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db9ed9bc-10e9-421d-8a6d-d3b846d262a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b6cdb320cb12       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   64b2fb317bf54       busybox-7dff88458-hmsb2
	fceea5af30884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7f70accb19994       coredns-7c65d6cfc9-vzhrs
	504391361e9f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e1bfaf7843489       storage-provisioner
	8f008021913ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   61e4d18ef53ff       coredns-7c65d6cfc9-bsbth
	20dea9bfd7b93       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   12e4b7f578705       kube-proxy-jqwtw
	70e8cba43f15f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   a1aa2ae427e36       kindnet-5d9ww
	58d7d0f860c2c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2b178d8dcf3ad       kube-vip-ha-790780
	579e069dd212e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   d632e3d4755d2       kube-scheduler-ha-790780
	4881d47948f52       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   d65f8d57327b0       kube-controller-manager-ha-790780
	f13343b3ed39e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9e910662aa470       kube-apiserver-ha-790780
	621532bf94f06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   cf20e920bbbdf       etcd-ha-790780
	
	
	==> coredns [8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927] <==
	[INFO] 10.244.1.2:59395 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000129294s
	[INFO] 10.244.1.2:33748 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00097443s
	[INFO] 10.244.0.4:46523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219823s
	[INFO] 10.244.2.2:35535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239865s
	[INFO] 10.244.2.2:36372 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017141396s
	[INFO] 10.244.2.2:50254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209403s
	[INFO] 10.244.1.2:48243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198306s
	[INFO] 10.244.1.2:39091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230366s
	[INFO] 10.244.1.2:49543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199975s
	[INFO] 10.244.0.4:45173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102778s
	[INFO] 10.244.0.4:32836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736533s
	[INFO] 10.244.0.4:44659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129519s
	[INFO] 10.244.0.4:54433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098668s
	[INFO] 10.244.0.4:37772 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.2.2:43894 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134793s
	[INFO] 10.244.2.2:34604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147389s
	[INFO] 10.244.1.2:53532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242838s
	[INFO] 10.244.1.2:45804 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159901s
	[INFO] 10.244.1.2:39298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112738s
	[INFO] 10.244.0.4:43692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093071s
	[INFO] 10.244.0.4:51414 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096722s
	[INFO] 10.244.2.2:56355 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295938s
	[INFO] 10.244.1.2:59520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142399s
	[INFO] 10.244.0.4:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090911s
	[INFO] 10.244.0.4:53926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114353s
	
	
	==> coredns [fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc] <==
	[INFO] 10.244.2.2:49856 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000346472s
	[INFO] 10.244.2.2:58522 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173747s
	[INFO] 10.244.2.2:60029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181162s
	[INFO] 10.244.2.2:38618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184142s
	[INFO] 10.244.1.2:46063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758433s
	[INFO] 10.244.1.2:60295 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001402726s
	[INFO] 10.244.1.2:38240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160236s
	[INFO] 10.244.1.2:41977 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113581s
	[INFO] 10.244.1.2:44892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133741s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105848s
	[INFO] 10.244.0.4:58776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144697s
	[INFO] 10.244.0.4:33311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001202009s
	[INFO] 10.244.2.2:57039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019058s
	[INFO] 10.244.2.2:57127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153386s
	[INFO] 10.244.1.2:52843 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168874s
	[INFO] 10.244.0.4:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014121s
	[INFO] 10.244.0.4:38864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079009s
	[INFO] 10.244.2.2:47502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158927s
	[INFO] 10.244.2.2:57106 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185408s
	[INFO] 10.244.2.2:34447 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139026s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015634s
	[INFO] 10.244.1.2:53446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288738s
	[INFO] 10.244.1.2:52114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166821s
	[INFO] 10.244.0.4:54732 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099319s
	[INFO] 10.244.0.4:49290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071388s
	
	
	==> describe nodes <==
	Name:               ha-790780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:06 +0000   Mon, 23 Sep 2024 10:52:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-790780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4137f4910e0940f183cebcb2073b69b7
	  System UUID:                4137f491-0e09-40f1-83ce-bcb2073b69b7
	  Boot ID:                    d20b206f-6d12-4950-af76-836822976902
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmsb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 coredns-7c65d6cfc9-bsbth             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 coredns-7c65d6cfc9-vzhrs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 etcd-ha-790780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-5d9ww                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-790780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-790780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-jqwtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-790780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-790780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m29s  kube-proxy       
	  Normal  Starting                 6m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s  kubelet          Node ha-790780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s  kubelet          Node ha-790780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s  kubelet          Node ha-790780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  NodeReady                6m19s  kubelet          Node ha-790780 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal  RegisteredNode           4m16s  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	
	
	Name:               ha-790780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:56:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 10:55:01 +0000   Mon, 23 Sep 2024 10:56:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-790780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87f6f3c7af44480934336376709a0c8
	  System UUID:                f87f6f3c-7af4-4480-9343-36376709a0c8
	  Boot ID:                    869cdc79-44fe-45ec-baeb-66b85d8eb577
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdk9n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-790780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m38s
	  kube-system                 kindnet-x2v9d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-790780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-ha-790780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-x8fb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-790780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-vip-ha-790780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-790780-m02 status is now: NodeNotReady
	
	
	Name:               ha-790780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:16 +0000   Mon, 23 Sep 2024 10:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-790780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2525d1b15b4365a533b4fbbc7d76d5
	  System UUID:                8a2525d1-b15b-4365-a533-b4fbbc7d76d5
	  Boot ID:                    a7b3ffe3-56b6-4c77-b8bb-b94fecea7ce9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2f4vm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-790780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-lzbx6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m24s
	  kube-system                 kube-apiserver-ha-790780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-790780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-rqjzc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-ha-790780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-790780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-790780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	
	
	Name:               ha-790780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_55_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:55:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:55:55 +0000   Mon, 23 Sep 2024 10:55:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-790780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8bb8bb71d764d5397c864a970ca06f0
	  System UUID:                a8bb8bb7-1d76-4d53-97c8-64a970ca06f0
	  Boot ID:                    43fa98cd-88cb-492d-a6f8-c4d1f11bcb1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sz6cc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-58k4g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  Starting                 3m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m15s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m15s)  kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m15s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-790780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 10:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050514] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.807632] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451360] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.609594] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.519719] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055679] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057192] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.186843] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114356] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269409] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.949380] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.106869] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.060266] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 10:52] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.081963] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.787202] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.501695] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 10:53] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989] <==
	{"level":"warn","ts":"2024-09-23T10:58:38.504819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.605574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.705511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.746592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.805524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.904710Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.976655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.983696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:38.987937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.020797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.027237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.034097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.041801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.048261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.052073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.055169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.060658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.066767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.072774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.076676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.079511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.083956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.090347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.096920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T10:58:39.104779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"de9917ec5c740094","from":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:58:39 up 7 min,  0 users,  load average: 0.30, 0.34, 0.18
	Linux ha-790780 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9] <==
	I0923 10:57:59.683870       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674500       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:09.674559       1 main.go:299] handling current node
	I0923 10:58:09.674578       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:09.674587       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:09.674781       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:09.674808       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:09.674853       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:09.674859       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:58:19.676409       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:19.676470       1 main.go:299] handling current node
	I0923 10:58:19.676501       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:19.676506       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:19.676695       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:19.676726       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:19.676792       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:19.676813       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 10:58:29.683950       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 10:58:29.684192       1 main.go:299] handling current node
	I0923 10:58:29.684303       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 10:58:29.684447       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 10:58:29.685323       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 10:58:29.685472       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 10:58:29.685646       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 10:58:29.685828       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f13343b3ed39eea629fa38c79eec8b7f9a63eae532aa54669eeeae0817e44e4d] <==
	I0923 10:52:02.470272       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 10:52:02.487288       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 10:52:02.636999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 10:52:06.966628       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 10:52:07.024027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 10:54:15.771868       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.772121       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.642µs, panicked: false, err: <nil>, panic-reason: <nil>" logger="UnhandledError"
	E0923 10:54:15.773436       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.774650       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 10:54:15.775958       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.219249ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0923 10:54:50.840870       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42568: use of closed network connection
	E0923 10:54:51.046928       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42582: use of closed network connection
	E0923 10:54:51.239325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42598: use of closed network connection
	E0923 10:54:51.469344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42622: use of closed network connection
	E0923 10:54:51.662336       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42652: use of closed network connection
	E0923 10:54:51.840022       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42678: use of closed network connection
	E0923 10:54:52.023650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0923 10:54:52.216046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42724: use of closed network connection
	E0923 10:54:52.402748       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42750: use of closed network connection
	E0923 10:54:52.693691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42788: use of closed network connection
	E0923 10:54:52.868191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42814: use of closed network connection
	E0923 10:54:53.230910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42838: use of closed network connection
	E0923 10:54:53.405713       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42860: use of closed network connection
	E0923 10:54:53.587256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42870: use of closed network connection
	W0923 10:56:21.308721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.234]
	
	
	==> kube-controller-manager [4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d] <==
	I0923 10:55:25.124525       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-790780-m04" podCIDRs=["10.244.3.0/24"]
	I0923 10:55:25.124586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.124620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.133509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.356496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:25.728032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:26.243588       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-790780-m04"
	I0923 10:55:26.283171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.507667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:27.553251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.470149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:28.543154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:35.178257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.206426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:55:46.224292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:46.262261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:55:55.382846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 10:56:46.290698       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 10:56:46.290858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.314933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:46.418190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.658083ms"
	I0923 10:56:46.418270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.621µs"
	I0923 10:56:48.568648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 10:56:51.466837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	
	
	==> kube-proxy [20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 10:52:09.262552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 10:52:09.284499       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.234"]
	E0923 10:52:09.284588       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:52:09.317271       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 10:52:09.317394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 10:52:09.317457       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:52:09.320801       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:52:09.321989       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:52:09.322038       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:52:09.326499       1 config.go:199] "Starting service config controller"
	I0923 10:52:09.327483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:52:09.328524       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:52:09.328570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:52:09.331934       1 config.go:328] "Starting node config controller"
	I0923 10:52:09.331976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:52:09.428869       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:52:09.429192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:52:09.432816       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e] <==
	E0923 10:52:00.723488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:52:00.842918       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:52:00.843015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:52:03.091035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:54:44.751853       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="8af6924d-0142-47f2-8cbe-927fbdaa50d7" pod="default/busybox-7dff88458-hdk9n" assumedNode="ha-790780-m02" currentNode="ha-790780-m03"
	E0923 10:54:44.780763       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m03"
	E0923 10:54:44.781985       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8af6924d-0142-47f2-8cbe-927fbdaa50d7(default/busybox-7dff88458-hdk9n) was assumed on ha-790780-m03 but assigned to ha-790780-m02" pod="default/busybox-7dff88458-hdk9n"
	E0923 10:54:44.782087       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-hdk9n\": pod busybox-7dff88458-hdk9n is already assigned to node \"ha-790780-m02\"" pod="default/busybox-7dff88458-hdk9n"
	I0923 10:54:44.782173       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-hdk9n" node="ha-790780-m02"
	E0923 10:55:25.174653       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xmfxv" node="ha-790780-m04"
	E0923 10:55:25.174983       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xmfxv\": pod kindnet-xmfxv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-xmfxv"
	E0923 10:55:25.175545       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-58k4g" node="ha-790780-m04"
	E0923 10:55:25.178321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-58k4g"
	E0923 10:55:25.223677       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.224053       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 143d16c9-72ab-4693-86a9-227280e3d88b(kube-system/kindnet-rhmrv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rhmrv"
	E0923 10:55:25.224238       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-rhmrv"
	I0923 10:55:25.224407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.257675       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.257807       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 20bf7e97-ed43-402a-b267-4c1d2f4b5bbf(kube-system/kindnet-sz6cc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sz6cc"
	E0923 10:55:25.257863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-sz6cc"
	I0923 10:55:25.257906       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.260301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 10:55:25.260462       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6f2d4b5-c6d7-4f34-b81a-2644640ae3bb(kube-system/kube-proxy-ghvw7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvw7"
	E0923 10:55:25.260529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-ghvw7"
	I0923 10:55:25.260575       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	
	
	==> kubelet <==
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752554    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:02 ha-790780 kubelet[1310]: E0923 10:57:02.752656    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089022751963172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759306    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:12 ha-790780 kubelet[1310]: E0923 10:57:12.759943    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089032758260960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761662    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:22 ha-790780 kubelet[1310]: E0923 10:57:22.761739    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089042761344235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763857    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:32 ha-790780 kubelet[1310]: E0923 10:57:32.763900    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089052763529781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767538    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:42 ha-790780 kubelet[1310]: E0923 10:57:42.767974    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089062766959170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770316    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:57:52 ha-790780 kubelet[1310]: E0923 10:57:52.770429    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089072770030326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.632462    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 10:58:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 10:58:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773513    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:02 ha-790780 kubelet[1310]: E0923 10:58:02.773536    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089082773175802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775728    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:12 ha-790780 kubelet[1310]: E0923 10:58:12.775771    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089092775452254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.777799    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:22 ha-790780 kubelet[1310]: E0923 10:58:22.778161    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089102777431416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:32 ha-790780 kubelet[1310]: E0923 10:58:32.780237    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112779957598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:58:32 ha-790780 kubelet[1310]: E0923 10:58:32.780276    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089112779957598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-790780 -n ha-790780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-790780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (415.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-790780 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-790780 -v=7 --alsologtostderr
E0923 10:59:15.431009   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-790780 -v=7 --alsologtostderr: exit status 82 (2m1.87744792s)

                                                
                                                
-- stdout --
	* Stopping node "ha-790780-m04"  ...
	* Stopping node "ha-790780-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:58:44.186211   30168 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:58:44.186495   30168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:58:44.186505   30168 out.go:358] Setting ErrFile to fd 2...
	I0923 10:58:44.186511   30168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:58:44.186692   30168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:58:44.186973   30168 out.go:352] Setting JSON to false
	I0923 10:58:44.187077   30168 mustload.go:65] Loading cluster: ha-790780
	I0923 10:58:44.187472   30168 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:58:44.187572   30168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 10:58:44.187757   30168 mustload.go:65] Loading cluster: ha-790780
	I0923 10:58:44.187909   30168 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:58:44.187950   30168 stop.go:39] StopHost: ha-790780-m04
	I0923 10:58:44.188322   30168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:58:44.188392   30168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:58:44.203254   30168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0923 10:58:44.203773   30168 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:58:44.204407   30168 main.go:141] libmachine: Using API Version  1
	I0923 10:58:44.204432   30168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:58:44.204749   30168 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:58:44.207103   30168 out.go:177] * Stopping node "ha-790780-m04"  ...
	I0923 10:58:44.208214   30168 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 10:58:44.208235   30168 main.go:141] libmachine: (ha-790780-m04) Calling .DriverName
	I0923 10:58:44.208416   30168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 10:58:44.208437   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHHostname
	I0923 10:58:44.210901   30168 main.go:141] libmachine: (ha-790780-m04) DBG | domain ha-790780-m04 has defined MAC address 52:54:00:3a:9e:f2 in network mk-ha-790780
	I0923 10:58:44.211287   30168 main.go:141] libmachine: (ha-790780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:9e:f2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:55:09 +0000 UTC Type:0 Mac:52:54:00:3a:9e:f2 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-790780-m04 Clientid:01:52:54:00:3a:9e:f2}
	I0923 10:58:44.211312   30168 main.go:141] libmachine: (ha-790780-m04) DBG | domain ha-790780-m04 has defined IP address 192.168.39.134 and MAC address 52:54:00:3a:9e:f2 in network mk-ha-790780
	I0923 10:58:44.211412   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHPort
	I0923 10:58:44.211585   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHKeyPath
	I0923 10:58:44.211727   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHUsername
	I0923 10:58:44.211884   30168 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m04/id_rsa Username:docker}
	I0923 10:58:44.299787   30168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 10:58:44.355074   30168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 10:58:44.408966   30168 main.go:141] libmachine: Stopping "ha-790780-m04"...
	I0923 10:58:44.409003   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetState
	I0923 10:58:44.410277   30168 main.go:141] libmachine: (ha-790780-m04) Calling .Stop
	I0923 10:58:44.413400   30168 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 0/120
	I0923 10:58:45.599369   30168 main.go:141] libmachine: (ha-790780-m04) Calling .GetState
	I0923 10:58:45.600405   30168 main.go:141] libmachine: Machine "ha-790780-m04" was stopped.
	I0923 10:58:45.600432   30168 stop.go:75] duration metric: took 1.392209218s to stop
	I0923 10:58:45.600454   30168 stop.go:39] StopHost: ha-790780-m03
	I0923 10:58:45.600736   30168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:58:45.600775   30168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:58:45.615147   30168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0923 10:58:45.615586   30168 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:58:45.616047   30168 main.go:141] libmachine: Using API Version  1
	I0923 10:58:45.616068   30168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:58:45.616406   30168 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:58:45.618492   30168 out.go:177] * Stopping node "ha-790780-m03"  ...
	I0923 10:58:45.619792   30168 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 10:58:45.619825   30168 main.go:141] libmachine: (ha-790780-m03) Calling .DriverName
	I0923 10:58:45.620035   30168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 10:58:45.620058   30168 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHHostname
	I0923 10:58:45.622598   30168 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:58:45.623060   30168 main.go:141] libmachine: (ha-790780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:88:d2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:53:40 +0000 UTC Type:0 Mac:52:54:00:da:88:d2 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-790780-m03 Clientid:01:52:54:00:da:88:d2}
	I0923 10:58:45.623093   30168 main.go:141] libmachine: (ha-790780-m03) DBG | domain ha-790780-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:da:88:d2 in network mk-ha-790780
	I0923 10:58:45.623225   30168 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHPort
	I0923 10:58:45.623388   30168 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHKeyPath
	I0923 10:58:45.623545   30168 main.go:141] libmachine: (ha-790780-m03) Calling .GetSSHUsername
	I0923 10:58:45.623684   30168 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m03/id_rsa Username:docker}
	I0923 10:58:45.710741   30168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 10:58:45.764664   30168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 10:58:45.821085   30168 main.go:141] libmachine: Stopping "ha-790780-m03"...
	I0923 10:58:45.821122   30168 main.go:141] libmachine: (ha-790780-m03) Calling .GetState
	I0923 10:58:45.822700   30168 main.go:141] libmachine: (ha-790780-m03) Calling .Stop
	I0923 10:58:45.826205   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 0/120
	I0923 10:58:46.827752   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 1/120
	I0923 10:58:47.829336   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 2/120
	I0923 10:58:48.830670   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 3/120
	I0923 10:58:49.831884   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 4/120
	I0923 10:58:50.833783   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 5/120
	I0923 10:58:51.835387   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 6/120
	I0923 10:58:52.836820   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 7/120
	I0923 10:58:53.838000   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 8/120
	I0923 10:58:54.839473   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 9/120
	I0923 10:58:55.841728   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 10/120
	I0923 10:58:56.843187   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 11/120
	I0923 10:58:57.844822   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 12/120
	I0923 10:58:58.846336   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 13/120
	I0923 10:58:59.847574   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 14/120
	I0923 10:59:00.849261   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 15/120
	I0923 10:59:01.850695   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 16/120
	I0923 10:59:02.852416   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 17/120
	I0923 10:59:03.854542   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 18/120
	I0923 10:59:04.856181   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 19/120
	I0923 10:59:05.858268   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 20/120
	I0923 10:59:06.859876   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 21/120
	I0923 10:59:07.861574   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 22/120
	I0923 10:59:08.862988   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 23/120
	I0923 10:59:09.864882   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 24/120
	I0923 10:59:10.866743   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 25/120
	I0923 10:59:11.868401   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 26/120
	I0923 10:59:12.869917   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 27/120
	I0923 10:59:13.871476   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 28/120
	I0923 10:59:14.873125   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 29/120
	I0923 10:59:15.875500   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 30/120
	I0923 10:59:16.877178   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 31/120
	I0923 10:59:17.878663   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 32/120
	I0923 10:59:18.880250   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 33/120
	I0923 10:59:19.881497   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 34/120
	I0923 10:59:20.883239   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 35/120
	I0923 10:59:21.884510   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 36/120
	I0923 10:59:22.886021   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 37/120
	I0923 10:59:23.887259   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 38/120
	I0923 10:59:24.888510   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 39/120
	I0923 10:59:25.890557   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 40/120
	I0923 10:59:26.891929   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 41/120
	I0923 10:59:27.893476   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 42/120
	I0923 10:59:28.894897   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 43/120
	I0923 10:59:29.896243   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 44/120
	I0923 10:59:30.897983   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 45/120
	I0923 10:59:31.900205   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 46/120
	I0923 10:59:32.901560   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 47/120
	I0923 10:59:33.904011   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 48/120
	I0923 10:59:34.905245   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 49/120
	I0923 10:59:35.906969   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 50/120
	I0923 10:59:36.908215   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 51/120
	I0923 10:59:37.910212   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 52/120
	I0923 10:59:38.911732   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 53/120
	I0923 10:59:39.913093   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 54/120
	I0923 10:59:40.914775   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 55/120
	I0923 10:59:41.916260   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 56/120
	I0923 10:59:42.917695   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 57/120
	I0923 10:59:43.919911   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 58/120
	I0923 10:59:44.921122   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 59/120
	I0923 10:59:45.922862   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 60/120
	I0923 10:59:46.924124   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 61/120
	I0923 10:59:47.925636   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 62/120
	I0923 10:59:48.928022   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 63/120
	I0923 10:59:49.929345   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 64/120
	I0923 10:59:50.931358   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 65/120
	I0923 10:59:51.932676   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 66/120
	I0923 10:59:52.934080   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 67/120
	I0923 10:59:53.935840   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 68/120
	I0923 10:59:54.937266   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 69/120
	I0923 10:59:55.938983   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 70/120
	I0923 10:59:56.940379   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 71/120
	I0923 10:59:57.941804   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 72/120
	I0923 10:59:58.943163   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 73/120
	I0923 10:59:59.945299   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 74/120
	I0923 11:00:00.946742   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 75/120
	I0923 11:00:01.948355   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 76/120
	I0923 11:00:02.949996   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 77/120
	I0923 11:00:03.951383   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 78/120
	I0923 11:00:04.952685   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 79/120
	I0923 11:00:05.953967   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 80/120
	I0923 11:00:06.955839   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 81/120
	I0923 11:00:07.957167   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 82/120
	I0923 11:00:08.958742   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 83/120
	I0923 11:00:09.960053   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 84/120
	I0923 11:00:10.961415   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 85/120
	I0923 11:00:11.962706   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 86/120
	I0923 11:00:12.964004   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 87/120
	I0923 11:00:13.965535   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 88/120
	I0923 11:00:14.966786   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 89/120
	I0923 11:00:15.968540   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 90/120
	I0923 11:00:16.969920   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 91/120
	I0923 11:00:17.971228   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 92/120
	I0923 11:00:18.972575   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 93/120
	I0923 11:00:19.974067   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 94/120
	I0923 11:00:20.975376   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 95/120
	I0923 11:00:21.976784   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 96/120
	I0923 11:00:22.978210   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 97/120
	I0923 11:00:23.979661   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 98/120
	I0923 11:00:24.981235   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 99/120
	I0923 11:00:25.982947   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 100/120
	I0923 11:00:26.984310   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 101/120
	I0923 11:00:27.985710   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 102/120
	I0923 11:00:28.987299   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 103/120
	I0923 11:00:29.989422   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 104/120
	I0923 11:00:30.991194   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 105/120
	I0923 11:00:31.992444   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 106/120
	I0923 11:00:32.993851   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 107/120
	I0923 11:00:33.995194   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 108/120
	I0923 11:00:34.996527   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 109/120
	I0923 11:00:35.998925   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 110/120
	I0923 11:00:37.001043   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 111/120
	I0923 11:00:38.002640   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 112/120
	I0923 11:00:39.004048   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 113/120
	I0923 11:00:40.005512   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 114/120
	I0923 11:00:41.007773   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 115/120
	I0923 11:00:42.009097   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 116/120
	I0923 11:00:43.011124   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 117/120
	I0923 11:00:44.012527   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 118/120
	I0923 11:00:45.013892   30168 main.go:141] libmachine: (ha-790780-m03) Waiting for machine to stop 119/120
	I0923 11:00:46.014572   30168 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 11:00:46.014638   30168 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0923 11:00:46.016676   30168 out.go:201] 
	W0923 11:00:46.018298   30168 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0923 11:00:46.018317   30168 out.go:270] * 
	* 
	W0923 11:00:46.020597   30168 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:00:46.021905   30168 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-790780 -v=7 --alsologtostderr" : exit status 82
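The stop failed because the ha-790780-m03 VM was still reported as "Running" after all 120 shutdown polls (see the "Waiting for machine to stop N/120" lines above), so minikube gave up waiting and exited with GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of how one might reproduce the same check outside the test harness, assuming the ha-790780 profile and the kvm2/libvirt setup used in this run (the qemu:///system URI and per-node domain names are taken from the log above):

	# re-run the stop with verbose logging to watch the per-VM shutdown polling
	out/minikube-linux-amd64 stop -p ha-790780 -v=7 --alsologtostderr

	# check whether the libvirt domains actually powered off
	virsh -c qemu:///system list --all

	# hypothetical manual cleanup if a node stays running (not something the test itself does)
	virsh -c qemu:///system destroy ha-790780-m03

The `minikube logs --file=logs.txt` command suggested in the box above collects guest-side logs that can show why the shutdown request was not honored.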
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-790780 --wait=true -v=7 --alsologtostderr
E0923 11:00:57.441029   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:01:25.143074   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:04:15.431678   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-790780 --wait=true -v=7 --alsologtostderr: (4m50.945717108s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-790780
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-790780 -n ha-790780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 logs -n 25: (1.787318342s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m04 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp testdata/cp-test.txt                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m03 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-790780 node stop m02 -v=7                                                    | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-790780 node start m02 -v=7                                                   | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-790780 -v=7                                                          | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-790780 -v=7                                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-790780 --wait=true -v=7                                                   | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:00 UTC | 23 Sep 24 11:05 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-790780                                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:05 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:00:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:00:46.064406   30645 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:00:46.064645   30645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:46.064654   30645 out.go:358] Setting ErrFile to fd 2...
	I0923 11:00:46.064658   30645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:46.064828   30645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:00:46.065338   30645 out.go:352] Setting JSON to false
	I0923 11:00:46.066226   30645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2589,"bootTime":1727086657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:00:46.066317   30645 start.go:139] virtualization: kvm guest
	I0923 11:00:46.068495   30645 out.go:177] * [ha-790780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:00:46.069866   30645 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:00:46.069875   30645 notify.go:220] Checking for updates...
	I0923 11:00:46.072176   30645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:00:46.073500   30645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:00:46.074669   30645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:00:46.075743   30645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:00:46.077023   30645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:00:46.078681   30645 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:00:46.078766   30645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:00:46.079183   30645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:00:46.079227   30645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:00:46.093942   30645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0923 11:00:46.094355   30645 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:00:46.094816   30645 main.go:141] libmachine: Using API Version  1
	I0923 11:00:46.094832   30645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:00:46.095251   30645 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:00:46.095445   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.129327   30645 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 11:00:46.130722   30645 start.go:297] selected driver: kvm2
	I0923 11:00:46.130737   30645 start.go:901] validating driver "kvm2" against &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:00:46.130877   30645 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:00:46.131244   30645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:00:46.131332   30645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:00:46.145982   30645 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:00:46.146672   30645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:00:46.146704   30645 cni.go:84] Creating CNI manager for ""
	I0923 11:00:46.146766   30645 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 11:00:46.146850   30645 start.go:340] cluster config:
	{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:00:46.147043   30645 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:00:46.149786   30645 out.go:177] * Starting "ha-790780" primary control-plane node in "ha-790780" cluster
	I0923 11:00:46.151102   30645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:00:46.151155   30645 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 11:00:46.151168   30645 cache.go:56] Caching tarball of preloaded images
	I0923 11:00:46.151255   30645 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:00:46.151267   30645 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 11:00:46.151429   30645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 11:00:46.151645   30645 start.go:360] acquireMachinesLock for ha-790780: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:00:46.151703   30645 start.go:364] duration metric: took 36.766µs to acquireMachinesLock for "ha-790780"
	I0923 11:00:46.151722   30645 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:00:46.151729   30645 fix.go:54] fixHost starting: 
	I0923 11:00:46.151985   30645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:00:46.152022   30645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:00:46.166913   30645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0923 11:00:46.167278   30645 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:00:46.167671   30645 main.go:141] libmachine: Using API Version  1
	I0923 11:00:46.167685   30645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:00:46.168025   30645 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:00:46.168190   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.168326   30645 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 11:00:46.169976   30645 fix.go:112] recreateIfNeeded on ha-790780: state=Running err=<nil>
	W0923 11:00:46.169996   30645 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:00:46.171880   30645 out.go:177] * Updating the running kvm2 "ha-790780" VM ...
	I0923 11:00:46.173241   30645 machine.go:93] provisionDockerMachine start ...
	I0923 11:00:46.173264   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.173466   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.175686   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.176082   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.176103   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.176267   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.176440   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.176592   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.176733   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.176885   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.177078   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.177088   30645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:00:46.286735   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 11:00:46.286767   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.287006   30645 buildroot.go:166] provisioning hostname "ha-790780"
	I0923 11:00:46.287028   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.287222   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.290117   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.290470   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.290494   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.290689   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.290854   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.291004   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.291143   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.291264   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.291441   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.291455   30645 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780 && echo "ha-790780" | sudo tee /etc/hostname
	I0923 11:00:46.419999   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 11:00:46.420021   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.422746   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.423161   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.423190   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.423324   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.423507   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.423708   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.423871   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.424019   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.424299   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.424321   30645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:00:46.539065   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:00:46.539090   30645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:00:46.539130   30645 buildroot.go:174] setting up certificates
	I0923 11:00:46.539157   30645 provision.go:84] configureAuth start
	I0923 11:00:46.539186   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.539468   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:00:46.542430   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.542796   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.542824   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.542977   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.545452   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.545864   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.545892   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.546051   30645 provision.go:143] copyHostCerts
	I0923 11:00:46.546078   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:00:46.546116   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:00:46.546130   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:00:46.546201   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:00:46.546298   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:00:46.546318   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:00:46.546323   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:00:46.546351   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:00:46.546445   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:00:46.546475   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:00:46.546480   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:00:46.546519   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:00:46.546591   30645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780 san=[127.0.0.1 192.168.39.234 ha-790780 localhost minikube]
	I0923 11:00:46.722519   30645 provision.go:177] copyRemoteCerts
	I0923 11:00:46.722587   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:00:46.722614   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.725263   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.725643   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.725669   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.725886   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.726058   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.726201   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.726346   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:00:46.812646   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 11:00:46.812725   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:00:46.845772   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 11:00:46.845851   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 11:00:46.876414   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 11:00:46.876487   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:00:46.907338   30645 provision.go:87] duration metric: took 368.161257ms to configureAuth
	I0923 11:00:46.907368   30645 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:00:46.907647   30645 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:00:46.907731   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.910339   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.910701   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.910722   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.910935   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.911099   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.911217   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.911384   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.911628   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.911821   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.911837   30645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:02:17.730123   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:02:17.730166   30645 machine.go:96] duration metric: took 1m31.556909446s to provisionDockerMachine
	I0923 11:02:17.730186   30645 start.go:293] postStartSetup for "ha-790780" (driver="kvm2")
	I0923 11:02:17.730200   30645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:02:17.730223   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:17.730524   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:02:17.730554   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.733871   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.734312   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.734341   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.734490   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.734655   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.734815   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.734926   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:17.820838   30645 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:02:17.824978   30645 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:02:17.825009   30645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:02:17.825078   30645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:02:17.825152   30645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:02:17.825162   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 11:02:17.825262   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:02:17.834787   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:02:17.859684   30645 start.go:296] duration metric: took 129.482031ms for postStartSetup
	I0923 11:02:17.859731   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:17.860060   30645 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0923 11:02:17.860093   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.863041   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.863532   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.863565   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.863774   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.864024   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.864195   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.864366   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	W0923 11:02:17.956857   30645 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0923 11:02:17.956889   30645 fix.go:56] duration metric: took 1m31.805160639s for fixHost
	I0923 11:02:17.956914   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.959350   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.959800   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.959820   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.960026   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.960214   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.960383   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.960504   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.960624   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:02:17.960775   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:02:17.960785   30645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:02:18.066378   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727089338.034432394
	
	I0923 11:02:18.066398   30645 fix.go:216] guest clock: 1727089338.034432394
	I0923 11:02:18.066406   30645 fix.go:229] Guest: 2024-09-23 11:02:18.034432394 +0000 UTC Remote: 2024-09-23 11:02:17.956897234 +0000 UTC m=+91.925852974 (delta=77.53516ms)
	I0923 11:02:18.066466   30645 fix.go:200] guest clock delta is within tolerance: 77.53516ms
	I0923 11:02:18.066473   30645 start.go:83] releasing machines lock for "ha-790780", held for 1m31.914758036s
	I0923 11:02:18.066500   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.066741   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:02:18.069323   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.069769   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.069794   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.069984   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070481   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070652   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070775   30645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:02:18.070818   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:18.070841   30645 ssh_runner.go:195] Run: cat /version.json
	I0923 11:02:18.070862   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:18.073329   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073568   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073640   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.073661   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073801   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:18.073980   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:18.074079   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.074105   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.074133   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:18.074317   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:18.074374   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:18.074530   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:18.074658   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:18.074807   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:18.155026   30645 ssh_runner.go:195] Run: systemctl --version
	I0923 11:02:18.176484   30645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:02:18.341906   30645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:02:18.348019   30645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:02:18.348097   30645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:02:18.358103   30645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:02:18.358131   30645 start.go:495] detecting cgroup driver to use...
	I0923 11:02:18.358210   30645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:02:18.375345   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:02:18.389411   30645 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:02:18.389494   30645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:02:18.403408   30645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:02:18.417609   30645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:02:18.572483   30645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:02:18.719749   30645 docker.go:233] disabling docker service ...
	I0923 11:02:18.719827   30645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:02:18.736985   30645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:02:18.750558   30645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:02:18.904237   30645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:02:19.064504   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:02:19.079601   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:02:19.098515   30645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:02:19.098580   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.109629   30645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:02:19.109710   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.120828   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.132015   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.143026   30645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:02:19.154534   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.166848   30645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.177991   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.188964   30645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:02:19.198947   30645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:02:19.208586   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:02:19.355796   30645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:02:19.589133   30645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:02:19.589200   30645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:02:19.594073   30645 start.go:563] Will wait 60s for crictl version
	I0923 11:02:19.594120   30645 ssh_runner.go:195] Run: which crictl
	I0923 11:02:19.597995   30645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:02:19.637321   30645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:02:19.637427   30645 ssh_runner.go:195] Run: crio --version
	I0923 11:02:19.667582   30645 ssh_runner.go:195] Run: crio --version
	I0923 11:02:19.699606   30645 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:02:19.701110   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:02:19.703843   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:19.704198   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:19.704232   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:19.704442   30645 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:02:19.709271   30645 kubeadm.go:883] updating cluster {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:02:19.709453   30645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:02:19.709500   30645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:02:19.753710   30645 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:02:19.753730   30645 crio.go:433] Images already preloaded, skipping extraction
	I0923 11:02:19.753775   30645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:02:19.788265   30645 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:02:19.788287   30645 cache_images.go:84] Images are preloaded, skipping loading
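Here the preload tarball already contains every image the cluster needs, so no pulls are required. The same crictl call used above can be turned into a readable list (jq is assumed to be available; the log itself does not show it):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort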
	I0923 11:02:19.788297   30645 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0923 11:02:19.788401   30645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
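The unit drop-in above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. The effective unit, base file plus drop-ins, can be inspected with standard systemd commands (not part of this run):

    systemctl cat kubelet
    systemctl show kubelet -p DropInPaths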
	I0923 11:02:19.788490   30645 ssh_runner.go:195] Run: crio config
	I0923 11:02:19.844387   30645 cni.go:84] Creating CNI manager for ""
	I0923 11:02:19.844411   30645 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 11:02:19.844423   30645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:02:19.844449   30645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-790780 NodeName:ha-790780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:02:19.844568   30645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-790780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
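The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). Against a file like that, a dry run is a safe way to let kubeadm validate the configuration without touching the node (a sketch; the test does not execute it):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run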
	I0923 11:02:19.844584   30645 kube-vip.go:115] generating kube-vip config ...
	I0923 11:02:19.844621   30645 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 11:02:19.856144   30645 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
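The modprobe above loads the IPVS and conntrack modules before control-plane load balancing is enabled in kube-vip; lsmod confirms they are present:

    lsmod | grep -E '^(ip_vs|nf_conntrack)'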
	I0923 11:02:19.856254   30645 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
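This manifest is dropped into /etc/kubernetes/manifests as a static pod (see the scp below), so kubelet starts kube-vip before the API server is reachable. With the values in the config above, the usual health checks are the leader-election lease and the VIP on eth0 (a sketch, applicable once the cluster answers):

    kubectl -n kube-system get lease plndr-cp-lock
    ip -brief addr show eth0 | grep 192.168.39.254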
	I0923 11:02:19.856307   30645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:02:19.865988   30645 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:02:19.866077   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 11:02:19.875936   30645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 11:02:19.892421   30645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:02:19.912476   30645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 11:02:19.929396   30645 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 11:02:19.946332   30645 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 11:02:19.959472   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:02:20.189897   30645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:02:20.303987   30645 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.234
	I0923 11:02:20.304012   30645 certs.go:194] generating shared ca certs ...
	I0923 11:02:20.304027   30645 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.304221   30645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:02:20.304291   30645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:02:20.304303   30645 certs.go:256] generating profile certs ...
	I0923 11:02:20.304435   30645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 11:02:20.304469   30645 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31
	I0923 11:02:20.304482   30645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.128 192.168.39.254]
	I0923 11:02:20.455240   30645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 ...
	I0923 11:02:20.455273   30645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31: {Name:mkdd13263d411ac22153f0ed73b22b324c896e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.455440   30645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31 ...
	I0923 11:02:20.455485   30645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31: {Name:mk70b0a21264793d843e117e3484249727f08088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.455570   30645 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 11:02:20.455706   30645 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 11:02:20.455832   30645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 11:02:20.455848   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:02:20.455862   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:02:20.455874   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:02:20.455888   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:02:20.455898   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:02:20.455910   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:02:20.455919   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:02:20.455932   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:02:20.455994   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:02:20.456027   30645 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:02:20.456034   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:02:20.456055   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:02:20.456079   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:02:20.456100   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:02:20.456136   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:02:20.456161   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 11:02:20.456175   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:20.456186   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 11:02:20.456709   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:02:20.566604   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:02:20.799900   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:02:20.899029   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:02:21.061537   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0923 11:02:21.286848   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:02:21.363403   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:02:21.447641   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:02:21.529645   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:02:21.576218   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:02:21.609568   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:02:21.649302   30645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
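All certificates are now in place under /var/lib/minikube/certs. The apiserver certificate generated earlier should carry the control-plane IPs and the HA VIP (192.168.39.254) as SANs, which openssl can confirm:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'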
	I0923 11:02:21.670631   30645 ssh_runner.go:195] Run: openssl version
	I0923 11:02:21.677922   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:02:21.692951   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.698588   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.698652   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.706590   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:02:21.719760   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:02:21.734532   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.739531   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.739578   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.747343   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:02:21.762983   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:02:21.776333   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.782372   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.782425   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.789661   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
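The three blocks above install each CA into /etc/ssl/certs using OpenSSL's subject-hash naming: the link name is the certificate's subject hash plus ".0". The hash seen in the log for minikubeCA (b5213941) can be reproduced directly:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0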
	I0923 11:02:21.803779   30645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:02:21.809064   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:02:21.816048   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:02:21.825779   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:02:21.832682   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:02:21.839864   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:02:21.847520   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
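Each -checkend 86400 call asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes. As a standalone check:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'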
	I0923 11:02:21.856771   30645 kubeadm.go:392] StartCluster: {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:02:21.856882   30645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:02:21.856924   30645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:02:21.962509   30645 cri.go:89] found id: "f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212"
	I0923 11:02:21.962531   30645 cri.go:89] found id: "d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55"
	I0923 11:02:21.962535   30645 cri.go:89] found id: "c56c3580874be035e042518b502515665df5360bd21ae78b62026beabcae7cc6"
	I0923 11:02:21.962538   30645 cri.go:89] found id: "f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162"
	I0923 11:02:21.962541   30645 cri.go:89] found id: "4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298"
	I0923 11:02:21.962544   30645 cri.go:89] found id: "75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff"
	I0923 11:02:21.962546   30645 cri.go:89] found id: "83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b"
	I0923 11:02:21.962549   30645 cri.go:89] found id: "b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f"
	I0923 11:02:21.962551   30645 cri.go:89] found id: "22204bd495b03e28187d9154549a73a14b2715e53031cb7d2d6badcf29089638"
	I0923 11:02:21.962556   30645 cri.go:89] found id: "69655118ed4c82e8855377fae7bba4bbb2d8d9dd41da544be8d93bd0f03ec0e6"
	I0923 11:02:21.962558   30645 cri.go:89] found id: "be801ba2348da0180c4bcd4aac4fe465b20bbc3011e3dd67c0fb8b1c18034949"
	I0923 11:02:21.962560   30645 cri.go:89] found id: "fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc"
	I0923 11:02:21.962563   30645 cri.go:89] found id: "8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927"
	I0923 11:02:21.962565   30645 cri.go:89] found id: "20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5"
	I0923 11:02:21.962571   30645 cri.go:89] found id: "70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9"
	I0923 11:02:21.962575   30645 cri.go:89] found id: "579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e"
	I0923 11:02:21.962578   30645 cri.go:89] found id: "4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d"
	I0923 11:02:21.962582   30645 cri.go:89] found id: "621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989"
	I0923 11:02:21.962584   30645 cri.go:89] found id: ""
	I0923 11:02:21.962624   30645 ssh_runner.go:195] Run: sudo runc list -f json
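The container IDs found above come from a filtered crictl listing. Mapping those IDs back to container names and states can be done with the same tool (jq assumed available; not part of this run):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
      | jq -r '.containers[] | "\(.id[0:13])  \(.metadata.name)  \(.state)"'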
	
	
	==> CRI-O <==
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.676758194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089537676735141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b82fb99a-b5c4-49d7-a501-6b43840b3129 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.677307457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88bcb9c1-3fc4-4d32-8781-a2bfb8c88197 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.677410695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88bcb9c1-3fc4-4d32-8781-a2bfb8c88197 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.678155707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88bcb9c1-3fc4-4d32-8781-a2bfb8c88197 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.724460585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23d160d9-c2b8-46fc-9b30-10199a22f1b8 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.724564368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23d160d9-c2b8-46fc-9b30-10199a22f1b8 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.727107829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a924618-3c5a-42d4-92b9-db45a4b58525 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.727708362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089537727681630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a924618-3c5a-42d4-92b9-db45a4b58525 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.728337241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4282099-5adb-4798-a0b6-ad8a697e07e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.728597325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4282099-5adb-4798-a0b6-ad8a697e07e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.729136921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4282099-5adb-4798-a0b6-ad8a697e07e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.779009667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eea866cb-2fb5-4388-94b8-58d757ce684e name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.779087436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eea866cb-2fb5-4388-94b8-58d757ce684e name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.782091370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10757048-bb16-4fc0-967b-d49ee96ec732 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.782915591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089537782881945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10757048-bb16-4fc0-967b-d49ee96ec732 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.784086959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b20a6e0f-4ee4-4629-9c50-37474c7aebd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.784162024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b20a6e0f-4ee4-4629-9c50-37474c7aebd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.785864927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b20a6e0f-4ee4-4629-9c50-37474c7aebd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.850726715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=120f4711-44d6-45f9-9f9f-a04ac17c89c4 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.850804640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=120f4711-44d6-45f9-9f9f-a04ac17c89c4 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.856083377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0771979d-eb2f-48ff-b67f-b255506efe05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.856791353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089537856751769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0771979d-eb2f-48ff-b67f-b255506efe05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.857557747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69a5f593-5fd0-4ee7-bccc-933db09af5ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.857673419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69a5f593-5fd0-4ee7-bccc-933db09af5ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:05:37 ha-790780 crio[3637]: time="2024-09-23 11:05:37.858250241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69a5f593-5fd0-4ee7-bccc-933db09af5ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6edcfd8c7545c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   64c1265acf6cd       storage-provisioner
	86013bc9367e8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Running             kube-controller-manager   2                   8775ed754ced9       kube-controller-manager-ha-790780
	5d360ab7dc7cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   c81c26604c94a       kube-apiserver-ha-790780
	53890ceb98ce4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   891de0cca34ee       busybox-7dff88458-hmsb2
	d67e29811c4bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   64c1265acf6cd       storage-provisioner
	6d30b891529fa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago        Running             kube-vip                  0                   9f837719992a2       kube-vip-ha-790780
	13561286caf9b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago        Running             kube-proxy                1                   73e02d5cfff7f       kube-proxy-jqwtw
	f8850e49700ea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago        Exited              kube-apiserver            2                   c81c26604c94a       kube-apiserver-ha-790780
	d656a4217f330       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago        Exited              kube-controller-manager   1                   8775ed754ced9       kube-controller-manager-ha-790780
	f10b6c5729682       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   b2dc0ade55a88       coredns-7c65d6cfc9-bsbth
	4d39426c985ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   3865d2a32b68d       coredns-7c65d6cfc9-vzhrs
	75a0284bb89db       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               1                   ca9f662374b7c       kindnet-5d9ww
	83ecacf23cf80       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago        Running             kube-scheduler            1                   3f1f06e5066e4       kube-scheduler-ha-790780
	b663dbbec0498       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   3bb84cae3317c       etcd-ha-790780
	7b6cdb320cb12       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   64b2fb317bf54       busybox-7dff88458-hmsb2
	fceea5af30884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   7f70accb19994       coredns-7c65d6cfc9-vzhrs
	8f008021913ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   61e4d18ef53ff       coredns-7c65d6cfc9-bsbth
	20dea9bfd7b93       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   12e4b7f578705       kube-proxy-jqwtw
	70e8cba43f15f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   a1aa2ae427e36       kindnet-5d9ww
	579e069dd212e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   d632e3d4755d2       kube-scheduler-ha-790780
	621532bf94f06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   cf20e920bbbdf       etcd-ha-790780
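
	The container status table above is the CRI view of every pod on the primary control-plane node ha-790780, including the exited first-attempt containers from before the restart. An equivalent listing can usually be regenerated on the node itself (a minimal sketch, assuming the minikube profile shares the node name ha-790780 and that crictl is available inside the VM):

	  minikube ssh -p ha-790780 -- sudo crictl ps -a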
	
	
	==> coredns [4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298] <==
	Trace[630445729]: [10.00145338s] [10.00145338s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1315314946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:02:30.410) (total time: 10001ms):
	Trace[1315314946]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:02:40.412)
	Trace[1315314946]: [10.001380413s] [10.001380413s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927] <==
	[INFO] 10.244.2.2:50254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209403s
	[INFO] 10.244.1.2:48243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198306s
	[INFO] 10.244.1.2:39091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230366s
	[INFO] 10.244.1.2:49543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199975s
	[INFO] 10.244.0.4:45173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102778s
	[INFO] 10.244.0.4:32836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736533s
	[INFO] 10.244.0.4:44659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129519s
	[INFO] 10.244.0.4:54433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098668s
	[INFO] 10.244.0.4:37772 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.2.2:43894 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134793s
	[INFO] 10.244.2.2:34604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147389s
	[INFO] 10.244.1.2:53532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242838s
	[INFO] 10.244.1.2:45804 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159901s
	[INFO] 10.244.1.2:39298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112738s
	[INFO] 10.244.0.4:43692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093071s
	[INFO] 10.244.0.4:51414 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096722s
	[INFO] 10.244.2.2:56355 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295938s
	[INFO] 10.244.1.2:59520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142399s
	[INFO] 10.244.0.4:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090911s
	[INFO] 10.244.0.4:53926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114353s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1792&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc] <==
	[INFO] 10.244.2.2:60029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181162s
	[INFO] 10.244.2.2:38618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184142s
	[INFO] 10.244.1.2:46063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758433s
	[INFO] 10.244.1.2:60295 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001402726s
	[INFO] 10.244.1.2:38240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160236s
	[INFO] 10.244.1.2:41977 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113581s
	[INFO] 10.244.1.2:44892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133741s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105848s
	[INFO] 10.244.0.4:58776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144697s
	[INFO] 10.244.0.4:33311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001202009s
	[INFO] 10.244.2.2:57039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019058s
	[INFO] 10.244.2.2:57127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153386s
	[INFO] 10.244.1.2:52843 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168874s
	[INFO] 10.244.0.4:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014121s
	[INFO] 10.244.0.4:38864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079009s
	[INFO] 10.244.2.2:47502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158927s
	[INFO] 10.244.2.2:57106 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185408s
	[INFO] 10.244.2.2:34447 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139026s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015634s
	[INFO] 10.244.1.2:53446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288738s
	[INFO] 10.244.1.2:52114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166821s
	[INFO] 10.244.0.4:54732 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099319s
	[INFO] 10.244.0.4:49290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071388s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
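
	The four coredns blocks above are per-container logs keyed by the container IDs shown in the listing earlier in this section; the repeated "connection refused", "no route to host", and TLS handshake timeout errors record a window in which the in-cluster apiserver address 10.96.0.1:443 was unreachable. Comparable logs can usually be pulled either through the API server or straight from the runtime on the node (a sketch, assuming kubectl is pointed at this cluster; the pod name and container ID prefix are taken from the output above):

	  kubectl -n kube-system logs coredns-7c65d6cfc9-vzhrs    # current container; add --previous for the exited attempt
	  sudo crictl logs 4d39426c985ca                          # on the node, by container ID prefix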
	
	
	==> describe nodes <==
	Name:               ha-790780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:05:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:03:09 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:03:09 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:03:09 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:03:09 +0000   Mon, 23 Sep 2024 10:52:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-790780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4137f4910e0940f183cebcb2073b69b7
	  System UUID:                4137f491-0e09-40f1-83ce-bcb2073b69b7
	  Boot ID:                    d20b206f-6d12-4950-af76-836822976902
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmsb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-bsbth             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-vzhrs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-790780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-5d9ww                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-790780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-790780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-jqwtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-790780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-790780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m31s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-790780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-790780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-790780 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-790780 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Warning  ContainerGCFailed        3m36s (x2 over 4m36s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m23s (x3 over 4m12s)  kubelet          Node ha-790780 status is now: NodeNotReady
	  Normal   RegisteredNode           2m34s                  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           2m21s                  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	
	
	Name:               ha-790780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:05:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-790780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87f6f3c7af44480934336376709a0c8
	  System UUID:                f87f6f3c-7af4-4480-9343-36376709a0c8
	  Boot ID:                    529d95b4-82c4-431d-ac12-76b1a8542c33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdk9n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-790780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-x2v9d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-790780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-790780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-x8fb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-790780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-790780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  NodeNotReady             8m52s                  node-controller  Node ha-790780-m02 status is now: NodeNotReady
	  Normal  Starting                 2m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m54s (x8 over 2m54s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x8 over 2m54s)  kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x7 over 2m54s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m34s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           2m21s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	
	
	Name:               ha-790780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:54:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:05:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:05:15 +0000   Mon, 23 Sep 2024 11:04:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:05:15 +0000   Mon, 23 Sep 2024 11:04:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:05:15 +0000   Mon, 23 Sep 2024 11:04:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:05:15 +0000   Mon, 23 Sep 2024 11:04:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-790780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a2525d1b15b4365a533b4fbbc7d76d5
	  System UUID:                8a2525d1-b15b-4365-a533-b4fbbc7d76d5
	  Boot ID:                    3aa35955-2cd9-4677-8bcd-b7dcac84f219
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2f4vm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-790780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-lzbx6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-790780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-790780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-rqjzc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-790780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-790780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-790780-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal   RegisteredNode           2m34s              node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal   RegisteredNode           2m21s              node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	  Normal   NodeNotReady             113s               node-controller  Node ha-790780-m03 status is now: NodeNotReady
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 54s                kubelet          Node ha-790780-m03 has been rebooted, boot id: 3aa35955-2cd9-4677-8bcd-b7dcac84f219
	  Normal   NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node ha-790780-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node ha-790780-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                54s                kubelet          Node ha-790780-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-790780-m03 event: Registered Node ha-790780-m03 in Controller
	
	
	Name:               ha-790780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_55_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:55:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:05:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:05:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-790780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8bb8bb71d764d5397c864a970ca06f0
	  System UUID:                a8bb8bb7-1d76-4d53-97c8-64a970ca06f0
	  Boot ID:                    f9dcb2f1-92f9-4730-90bf-ce863aaad94d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sz6cc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-58k4g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   NodeReady                9m52s              kubelet          Node ha-790780-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m34s              node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           2m21s              node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   NodeNotReady             113s               node-controller  Node ha-790780-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-790780-m04 has been rebooted, boot id: f9dcb2f1-92f9-4730-90bf-ce863aaad94d
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-790780-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-790780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.609594] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.519719] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055679] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057192] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.186843] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114356] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269409] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.949380] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.106869] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.060266] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 10:52] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.081963] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.787202] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.501695] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 10:53] kauditd_printk_skb: 26 callbacks suppressed
	[Sep23 11:02] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.147061] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.186029] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +0.154246] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.296440] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.828744] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	[ +16.115528] kauditd_printk_skb: 218 callbacks suppressed
	[Sep23 11:03] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989] <==
	{"level":"warn","ts":"2024-09-23T11:00:47.074851Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T11:00:39.294351Z","time spent":"7.780488095s","remote":"127.0.0.1:44930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/09/23 11:00:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-23T11:00:47.201462Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.234:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:00:47.201526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.234:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:00:47.201602Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"de9917ec5c740094","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-23T11:00:47.201815Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.201853Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.201879Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202040Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202123Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202217Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202246Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202253Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202263Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202301Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202337Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202416Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202462Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202491Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.205777Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"warn","ts":"2024-09-23T11:00:47.205868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.949231322s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-23T11:00:47.205890Z","caller":"traceutil/trace.go:171","msg":"trace[218702658] range","detail":"{range_begin:; range_end:; }","duration":"8.949266718s","start":"2024-09-23T11:00:38.256616Z","end":"2024-09-23T11:00:47.205883Z","steps":["trace[218702658] 'agreement among raft nodes before linearized reading'  (duration: 8.949230156s)"],"step_count":1}
	{"level":"error","ts":"2024-09-23T11:00:47.205956Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-23T11:00:47.206023Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-09-23T11:00:47.206762Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-790780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"]}
	
	
	==> etcd [b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f] <==
	{"level":"warn","ts":"2024-09-23T11:04:40.470936Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.128:2380/version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:40.471005Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:42.015605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:42.015647Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:44.472222Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.128:2380/version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:44.472483Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:47.016209Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:47.016429Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:48.474834Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.128:2380/version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:48.474919Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:49.858905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.963522ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:04:49.858985Z","caller":"traceutil/trace.go:171","msg":"trace[2098877886] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2374; }","duration":"113.089301ms","start":"2024-09-23T11:04:49.745871Z","end":"2024-09-23T11:04:49.858961Z","steps":["trace[2098877886] 'range keys from in-memory index tree'  (duration: 112.943851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:04:52.017407Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:52.017490Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"147b37cffd14ab5b","rtt":"0s","error":"dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:52.477029Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.128:2380/version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-23T11:04:52.477145Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"147b37cffd14ab5b","error":"Get \"https://192.168.39.128:2380/version\": dial tcp 192.168.39.128:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-23T11:04:53.863036Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.863280Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.863430Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.868960Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"de9917ec5c740094","to":"147b37cffd14ab5b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-23T11:04:53.869014Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.878136Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"de9917ec5c740094","to":"147b37cffd14ab5b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-23T11:04:53.878183Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:04:55.480233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.232074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:04:55.480311Z","caller":"traceutil/trace.go:171","msg":"trace[654632820] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:2396; }","duration":"118.336336ms","start":"2024-09-23T11:04:55.361960Z","end":"2024-09-23T11:04:55.480296Z","steps":["trace[654632820] 'count revisions from in-memory index tree'  (duration: 117.372201ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:05:38 up 14 min,  0 users,  load average: 0.60, 0.38, 0.27
	Linux ha-790780 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9] <==
	I0923 11:00:19.674787       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:19.674841       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:00:19.674977       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:19.674984       1 main.go:299] handling current node
	I0923 11:00:19.674995       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:19.674999       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:19.675057       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:19.675082       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:29.676582       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:29.676657       1 main.go:299] handling current node
	I0923 11:00:29.676676       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:29.676695       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:29.676852       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:29.676877       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:29.677003       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:29.677041       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:00:39.675498       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:39.675549       1 main.go:299] handling current node
	I0923 11:00:39.675580       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:39.675589       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:39.675745       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:39.675769       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:39.675838       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:39.675867       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	E0923 11:00:45.279537       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff] <==
	I0923 11:05:02.269246       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:05:12.275174       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:05:12.275207       1 main.go:299] handling current node
	I0923 11:05:12.275221       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:05:12.275226       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:05:12.275342       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:05:12.275424       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:05:12.275500       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:05:12.275522       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:05:22.268342       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:05:22.268469       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:05:22.268614       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:05:22.268640       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:05:22.268693       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:05:22.268714       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:05:22.268761       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:05:22.268783       1 main.go:299] handling current node
	I0923 11:05:32.267770       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:05:32.267863       1 main.go:299] handling current node
	I0923 11:05:32.267912       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:05:32.267952       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:05:32.268067       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:05:32.268091       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:05:32.268200       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:05:32.268236       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6] <==
	I0923 11:03:09.763590       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0923 11:03:09.837729       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:03:09.837820       1 policy_source.go:224] refreshing policies
	I0923 11:03:09.841235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:03:09.843472       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:03:09.849884       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:03:09.857632       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 11:03:09.859629       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:03:09.863421       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:03:09.863768       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:03:09.863848       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:03:09.863885       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:03:09.863908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:03:09.863932       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:03:09.865006       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:03:09.865059       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:03:09.865717       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0923 11:03:09.870693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.43]
	I0923 11:03:09.872889       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:03:09.880892       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0923 11:03:09.887458       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0923 11:03:09.924504       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 11:03:10.760941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 11:03:11.301718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.234 192.168.39.43]
	W0923 11:03:21.434866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.234 192.168.39.43]
	
	
	==> kube-apiserver [f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212] <==
	I0923 11:02:21.860015       1 options.go:228] external host was not specified, using 192.168.39.234
	I0923 11:02:21.864300       1 server.go:142] Version: v1.31.1
	I0923 11:02:21.864892       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:02:22.747941       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 11:02:22.755308       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:02:22.760310       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 11:02:22.760445       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 11:02:22.760795       1 instance.go:232] Using reconciler: lease
	W0923 11:02:42.744237       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 11:02:42.747215       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0923 11:02:42.761929       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e] <==
	I0923 11:03:45.042963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:03:45.117671       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.02947ms"
	I0923 11:03:45.118002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="119.596µs"
	I0923 11:03:48.057568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:03:50.274785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:03:53.799445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m02"
	I0923 11:03:58.135312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:04:00.353296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:04:10.584130       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wgvk2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wgvk2\": the object has been modified; please apply your changes to the latest version and try again"
	I0923 11:04:10.584524       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"be42ccff-a0a4-4bbe-96bb-28d63ca9743d", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wgvk2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wgvk2": the object has been modified; please apply your changes to the latest version and try again
	I0923 11:04:10.628438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.613675ms"
	I0923 11:04:10.628596       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="94.88µs"
	I0923 11:04:44.607724       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:04:44.635866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:04:45.236159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:04:45.657567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.437µs"
	I0923 11:04:59.705720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:04:59.781271       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:05:04.864727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.741253ms"
	I0923 11:05:04.864931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.015µs"
	I0923 11:05:15.233089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m03"
	I0923 11:05:29.520335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:05:29.521320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-790780-m04"
	I0923 11:05:29.541789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:05:29.731206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	
	
	==> kube-controller-manager [d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55] <==
	I0923 11:02:22.320684       1 serving.go:386] Generated self-signed cert in-memory
	I0923 11:02:22.796463       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 11:02:22.796506       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:02:22.798067       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 11:02:22.798565       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:02:22.798823       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:02:22.798897       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0923 11:02:43.767930       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.234:8443/healthz\": dial tcp 192.168.39.234:8443: connect: connection refused"
	
	
	==> kube-proxy [13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:02:23.812815       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:26.885053       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:29.957452       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:36.102598       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:45.316739       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:03:06.820896       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0923 11:03:06.821059       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0923 11:03:06.821183       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:03:06.860844       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:03:06.860993       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:03:06.861070       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:03:06.863827       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:03:06.864302       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:03:06.864433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:03:06.866605       1 config.go:199] "Starting service config controller"
	I0923 11:03:06.866679       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:03:06.866744       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:03:06.866773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:03:06.868115       1 config.go:328] "Starting node config controller"
	I0923 11:03:06.868159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:03:09.167603       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:03:09.167707       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:03:09.168508       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5] <==
	E0923 10:59:34.854183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:34.854273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:34.854427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:37.924543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:37.924828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:41.000193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:41.000329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:44.068818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:44.068886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:44.069038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:44.069102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:53.284695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	W0923 10:59:53.285283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:53.285545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0923 10:59:53.285060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:56.355932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:56.356010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:08.644459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:08.644675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:14.788258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:14.788343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:20.932658       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:20.933001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:42.436271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:42.436501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e] <==
	E0923 10:55:25.178321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-58k4g"
	E0923 10:55:25.223677       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.224053       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 143d16c9-72ab-4693-86a9-227280e3d88b(kube-system/kindnet-rhmrv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rhmrv"
	E0923 10:55:25.224238       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-rhmrv"
	I0923 10:55:25.224407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.257675       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.257807       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 20bf7e97-ed43-402a-b267-4c1d2f4b5bbf(kube-system/kindnet-sz6cc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sz6cc"
	E0923 10:55:25.257863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-sz6cc"
	I0923 10:55:25.257906       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.260301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 10:55:25.260462       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6f2d4b5-c6d7-4f34-b81a-2644640ae3bb(kube-system/kube-proxy-ghvw7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvw7"
	E0923 10:55:25.260529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-ghvw7"
	I0923 10:55:25.260575       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 11:00:38.412750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:40.170615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0923 11:00:41.294007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0923 11:00:41.338093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0923 11:00:42.348445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0923 11:00:42.606563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:42.762867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:44.038706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0923 11:00:44.725643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0923 11:00:45.118576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0923 11:00:46.454627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0923 11:00:47.034899       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b] <==
	W0923 11:02:59.272232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:02:59.272273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:00.149262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:00.149480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:00.442834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:00.442964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.234:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.100879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.101052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.161167       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.234:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.161308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.234:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.696105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.696177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.177145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.234:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.177253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.234:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.375884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.234:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.376034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.234:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.482342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.482552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:03.582725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:03.582859       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:04.164305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.234:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:04.164452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.234:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:05.353441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:05.353506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	I0923 11:03:20.675908       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:04:12 ha-790780 kubelet[1310]: I0923 11:04:12.636020    1310 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-790780"
	Sep 23 11:04:12 ha-790780 kubelet[1310]: E0923 11:04:12.852945    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089452852502634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:12 ha-790780 kubelet[1310]: E0923 11:04:12.853048    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089452852502634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:22 ha-790780 kubelet[1310]: I0923 11:04:22.624778    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-790780" podStartSLOduration=10.624518486 podStartE2EDuration="10.624518486s" podCreationTimestamp="2024-09-23 11:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-23 11:04:22.620625355 +0000 UTC m=+740.187507901" watchObservedRunningTime="2024-09-23 11:04:22.624518486 +0000 UTC m=+740.191401038"
	Sep 23 11:04:22 ha-790780 kubelet[1310]: E0923 11:04:22.854224    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089462853845428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:22 ha-790780 kubelet[1310]: E0923 11:04:22.854288    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089462853845428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:32 ha-790780 kubelet[1310]: E0923 11:04:32.858347    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089472857287252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:32 ha-790780 kubelet[1310]: E0923 11:04:32.858521    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089472857287252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:42 ha-790780 kubelet[1310]: E0923 11:04:42.860511    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089482860021943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:42 ha-790780 kubelet[1310]: E0923 11:04:42.860550    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089482860021943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:52 ha-790780 kubelet[1310]: E0923 11:04:52.865329    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089492862325582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:04:52 ha-790780 kubelet[1310]: E0923 11:04:52.865436    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089492862325582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:02 ha-790780 kubelet[1310]: E0923 11:05:02.632056    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 11:05:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:05:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:05:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:05:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:05:02 ha-790780 kubelet[1310]: E0923 11:05:02.867623    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089502866818086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:02 ha-790780 kubelet[1310]: E0923 11:05:02.867659    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089502866818086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:12 ha-790780 kubelet[1310]: E0923 11:05:12.871143    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089512870476328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:12 ha-790780 kubelet[1310]: E0923 11:05:12.871670    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089512870476328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:22 ha-790780 kubelet[1310]: E0923 11:05:22.874096    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089522873782968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:22 ha-790780 kubelet[1310]: E0923 11:05:22.874137    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089522873782968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:32 ha-790780 kubelet[1310]: E0923 11:05:32.875973    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089532875642262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:05:32 ha-790780 kubelet[1310]: E0923 11:05:32.876473    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089532875642262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:05:37.372742   32137 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19689-3961/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
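The stderr above is the classic symptom of Go's bufio.Scanner hitting its default token limit: the scanner rejects any single line longer than its buffer (64 KiB unless enlarged), and the string "bufio.Scanner: token too long" is bufio.ErrTooLong. The flattened cluster-config lines written to lastStart.txt (like the ones quoted later in this report) easily exceed that limit. A minimal illustrative sketch of the failure mode and the usual workaround via Scanner.Buffer follows; this is not minikube's actual logs code, and the file path is a hypothetical stand-in for the one named in the error.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path, stands in for the file named in the error above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line over 64 KiB makes sc.Err() return bufio.ErrTooLong
	// ("bufio.Scanner: token too long"). Here we allow tokens up to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(len(sc.Text())) // print each line's length, just to expose the very long lines
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
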
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-790780 -n ha-790780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-790780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (415.38s)
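In the kube-scheduler excerpt of the post-mortem logs above, every list/watch against https://192.168.39.234:8443 fails with "connect: connection refused" while that control-plane node is still restarting; the informer caches only sync (11:03:20) once the apiserver is reachable again. Below is a minimal, standalone sketch of the same reachability check, with the endpoint lifted from those log lines as an assumption; it is not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.234:8443" // apiserver endpoint taken from the reflector errors above (assumption)
	for i := 0; i < 10; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Prints "connection refused" until kube-apiserver is listening again.
			fmt.Printf("attempt %d: %v\n", i, err)
		} else {
			conn.Close()
			fmt.Printf("attempt %d: apiserver port reachable\n", i)
			return
		}
		time.Sleep(time.Second)
	}
}
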

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 stop -v=7 --alsologtostderr
E0923 11:05:57.439845   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-790780 stop -v=7 --alsologtostderr: exit status 82 (2m0.459877861s)

                                                
                                                
-- stdout --
	* Stopping node "ha-790780-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:05:56.747152   32577 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:05:56.747388   32577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:05:56.747396   32577 out.go:358] Setting ErrFile to fd 2...
	I0923 11:05:56.747400   32577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:05:56.747587   32577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:05:56.747803   32577 out.go:352] Setting JSON to false
	I0923 11:05:56.747875   32577 mustload.go:65] Loading cluster: ha-790780
	I0923 11:05:56.748243   32577 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:05:56.748320   32577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 11:05:56.748496   32577 mustload.go:65] Loading cluster: ha-790780
	I0923 11:05:56.748619   32577 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:05:56.748644   32577 stop.go:39] StopHost: ha-790780-m04
	I0923 11:05:56.748999   32577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:05:56.749036   32577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:05:56.764137   32577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0923 11:05:56.764634   32577 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:05:56.765241   32577 main.go:141] libmachine: Using API Version  1
	I0923 11:05:56.765266   32577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:05:56.765612   32577 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:05:56.767973   32577 out.go:177] * Stopping node "ha-790780-m04"  ...
	I0923 11:05:56.769095   32577 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0923 11:05:56.769130   32577 main.go:141] libmachine: (ha-790780-m04) Calling .DriverName
	I0923 11:05:56.769396   32577 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0923 11:05:56.769427   32577 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHHostname
	I0923 11:05:56.771989   32577 main.go:141] libmachine: (ha-790780-m04) DBG | domain ha-790780-m04 has defined MAC address 52:54:00:3a:9e:f2 in network mk-ha-790780
	I0923 11:05:56.772404   32577 main.go:141] libmachine: (ha-790780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:9e:f2", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 12:05:24 +0000 UTC Type:0 Mac:52:54:00:3a:9e:f2 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-790780-m04 Clientid:01:52:54:00:3a:9e:f2}
	I0923 11:05:56.772448   32577 main.go:141] libmachine: (ha-790780-m04) DBG | domain ha-790780-m04 has defined IP address 192.168.39.134 and MAC address 52:54:00:3a:9e:f2 in network mk-ha-790780
	I0923 11:05:56.772584   32577 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHPort
	I0923 11:05:56.772746   32577 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHKeyPath
	I0923 11:05:56.772884   32577 main.go:141] libmachine: (ha-790780-m04) Calling .GetSSHUsername
	I0923 11:05:56.772996   32577 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780-m04/id_rsa Username:docker}
	I0923 11:05:56.855846   32577 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0923 11:05:56.909320   32577 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0923 11:05:56.961971   32577 main.go:141] libmachine: Stopping "ha-790780-m04"...
	I0923 11:05:56.961997   32577 main.go:141] libmachine: (ha-790780-m04) Calling .GetState
	I0923 11:05:56.963593   32577 main.go:141] libmachine: (ha-790780-m04) Calling .Stop
	I0923 11:05:56.967056   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 0/120
	I0923 11:05:57.968907   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 1/120
	I0923 11:05:58.970103   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 2/120
	I0923 11:05:59.971432   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 3/120
	I0923 11:06:00.972886   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 4/120
	I0923 11:06:01.974693   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 5/120
	I0923 11:06:02.976255   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 6/120
	I0923 11:06:03.978103   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 7/120
	I0923 11:06:04.979864   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 8/120
	I0923 11:06:05.981236   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 9/120
	I0923 11:06:06.983227   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 10/120
	I0923 11:06:07.984373   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 11/120
	I0923 11:06:08.985640   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 12/120
	I0923 11:06:09.987707   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 13/120
	I0923 11:06:10.989208   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 14/120
	I0923 11:06:11.991279   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 15/120
	I0923 11:06:12.992653   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 16/120
	I0923 11:06:13.993919   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 17/120
	I0923 11:06:14.995672   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 18/120
	I0923 11:06:15.996848   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 19/120
	I0923 11:06:16.998046   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 20/120
	I0923 11:06:17.999813   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 21/120
	I0923 11:06:19.001319   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 22/120
	I0923 11:06:20.002746   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 23/120
	I0923 11:06:21.004774   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 24/120
	I0923 11:06:22.006612   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 25/120
	I0923 11:06:23.007912   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 26/120
	I0923 11:06:24.009354   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 27/120
	I0923 11:06:25.010547   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 28/120
	I0923 11:06:26.012656   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 29/120
	I0923 11:06:27.014431   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 30/120
	I0923 11:06:28.015812   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 31/120
	I0923 11:06:29.017354   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 32/120
	I0923 11:06:30.018683   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 33/120
	I0923 11:06:31.020305   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 34/120
	I0923 11:06:32.022353   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 35/120
	I0923 11:06:33.023944   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 36/120
	I0923 11:06:34.025631   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 37/120
	I0923 11:06:35.027036   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 38/120
	I0923 11:06:36.028617   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 39/120
	I0923 11:06:37.030745   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 40/120
	I0923 11:06:38.032107   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 41/120
	I0923 11:06:39.033484   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 42/120
	I0923 11:06:40.034719   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 43/120
	I0923 11:06:41.036030   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 44/120
	I0923 11:06:42.038243   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 45/120
	I0923 11:06:43.039959   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 46/120
	I0923 11:06:44.041363   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 47/120
	I0923 11:06:45.043348   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 48/120
	I0923 11:06:46.044895   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 49/120
	I0923 11:06:47.047005   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 50/120
	I0923 11:06:48.048351   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 51/120
	I0923 11:06:49.049649   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 52/120
	I0923 11:06:50.050842   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 53/120
	I0923 11:06:51.052188   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 54/120
	I0923 11:06:52.054311   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 55/120
	I0923 11:06:53.055846   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 56/120
	I0923 11:06:54.058133   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 57/120
	I0923 11:06:55.059491   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 58/120
	I0923 11:06:56.061776   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 59/120
	I0923 11:06:57.064008   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 60/120
	I0923 11:06:58.065292   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 61/120
	I0923 11:06:59.067227   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 62/120
	I0923 11:07:00.068685   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 63/120
	I0923 11:07:01.070019   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 64/120
	I0923 11:07:02.071876   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 65/120
	I0923 11:07:03.073115   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 66/120
	I0923 11:07:04.074571   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 67/120
	I0923 11:07:05.075767   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 68/120
	I0923 11:07:06.077513   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 69/120
	I0923 11:07:07.079503   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 70/120
	I0923 11:07:08.080764   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 71/120
	I0923 11:07:09.082037   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 72/120
	I0923 11:07:10.083668   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 73/120
	I0923 11:07:11.084986   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 74/120
	I0923 11:07:12.086966   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 75/120
	I0923 11:07:13.089310   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 76/120
	I0923 11:07:14.090642   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 77/120
	I0923 11:07:15.091980   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 78/120
	I0923 11:07:16.093257   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 79/120
	I0923 11:07:17.095303   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 80/120
	I0923 11:07:18.096842   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 81/120
	I0923 11:07:19.098153   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 82/120
	I0923 11:07:20.100114   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 83/120
	I0923 11:07:21.101628   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 84/120
	I0923 11:07:22.103063   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 85/120
	I0923 11:07:23.104809   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 86/120
	I0923 11:07:24.106008   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 87/120
	I0923 11:07:25.107691   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 88/120
	I0923 11:07:26.109021   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 89/120
	I0923 11:07:27.110730   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 90/120
	I0923 11:07:28.112310   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 91/120
	I0923 11:07:29.114243   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 92/120
	I0923 11:07:30.115905   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 93/120
	I0923 11:07:31.117252   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 94/120
	I0923 11:07:32.119150   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 95/120
	I0923 11:07:33.120615   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 96/120
	I0923 11:07:34.122129   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 97/120
	I0923 11:07:35.123828   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 98/120
	I0923 11:07:36.125217   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 99/120
	I0923 11:07:37.126894   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 100/120
	I0923 11:07:38.128218   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 101/120
	I0923 11:07:39.129607   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 102/120
	I0923 11:07:40.132066   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 103/120
	I0923 11:07:41.133540   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 104/120
	I0923 11:07:42.135417   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 105/120
	I0923 11:07:43.136623   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 106/120
	I0923 11:07:44.138146   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 107/120
	I0923 11:07:45.139920   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 108/120
	I0923 11:07:46.141212   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 109/120
	I0923 11:07:47.143098   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 110/120
	I0923 11:07:48.145249   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 111/120
	I0923 11:07:49.146755   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 112/120
	I0923 11:07:50.148152   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 113/120
	I0923 11:07:51.149408   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 114/120
	I0923 11:07:52.151367   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 115/120
	I0923 11:07:53.152636   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 116/120
	I0923 11:07:54.153823   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 117/120
	I0923 11:07:55.155737   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 118/120
	I0923 11:07:56.157208   32577 main.go:141] libmachine: (ha-790780-m04) Waiting for machine to stop 119/120
	I0923 11:07:57.157813   32577 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0923 11:07:57.157876   32577 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0923 11:07:57.159329   32577 out.go:201] 
	W0923 11:07:57.160443   32577 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0923 11:07:57.160466   32577 out.go:270] * 
	* 
	W0923 11:07:57.162764   32577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:07:57.164039   32577 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-790780 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr: (18.831896498s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr": 
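The stderr for this test shows libmachine polling ha-790780-m04 once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and still finding the VM Running, which is what surfaces as exit status 82 and the GUEST_STOP_TIMEOUT message. A rough sketch of that poll-and-timeout pattern is below; isStopped is a hypothetical stand-in for the kvm2 driver's state query, not minikube's real API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls isStopped once per second, mirroring the 0/120..119/120
// countdown in the log, and gives up if the machine never reports stopped.
func waitForStop(isStopped func() bool, attempts int) error {
	for i := 0; i < attempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never stops, as in this run; 5 attempts keeps the demo
	// short, whereas the real loop above uses 120.
	if err := waitForStop(func() bool { return false }, 5); err != nil {
		fmt.Println("stop err:", err) // minikube then exits with status 82 (GUEST_STOP_TIMEOUT)
	}
}
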
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-790780 -n ha-790780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 logs -n 25: (1.712235482s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m04 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp testdata/cp-test.txt                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt                      |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780 sudo cat                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780.txt                                |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m02 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n                                                                | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | ha-790780-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-790780 ssh -n ha-790780-m03 sudo cat                                         | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-790780 node stop m02 -v=7                                                    | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-790780 node start m02 -v=7                                                   | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-790780 -v=7                                                          | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-790780 -v=7                                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 10:58 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-790780 --wait=true -v=7                                                   | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:00 UTC | 23 Sep 24 11:05 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-790780                                                               | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:05 UTC |                     |
	| node    | ha-790780 node delete m03 -v=7                                                  | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:05 UTC | 23 Sep 24 11:05 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-790780 stop -v=7                                                             | ha-790780 | jenkins | v1.34.0 | 23 Sep 24 11:05 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:00:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:00:46.064406   30645 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:00:46.064645   30645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:46.064654   30645 out.go:358] Setting ErrFile to fd 2...
	I0923 11:00:46.064658   30645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:46.064828   30645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:00:46.065338   30645 out.go:352] Setting JSON to false
	I0923 11:00:46.066226   30645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2589,"bootTime":1727086657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:00:46.066317   30645 start.go:139] virtualization: kvm guest
	I0923 11:00:46.068495   30645 out.go:177] * [ha-790780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:00:46.069866   30645 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:00:46.069875   30645 notify.go:220] Checking for updates...
	I0923 11:00:46.072176   30645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:00:46.073500   30645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:00:46.074669   30645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:00:46.075743   30645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:00:46.077023   30645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:00:46.078681   30645 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:00:46.078766   30645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:00:46.079183   30645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:00:46.079227   30645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:00:46.093942   30645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0923 11:00:46.094355   30645 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:00:46.094816   30645 main.go:141] libmachine: Using API Version  1
	I0923 11:00:46.094832   30645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:00:46.095251   30645 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:00:46.095445   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.129327   30645 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 11:00:46.130722   30645 start.go:297] selected driver: kvm2
	I0923 11:00:46.130737   30645 start.go:901] validating driver "kvm2" against &{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:00:46.130877   30645 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:00:46.131244   30645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:00:46.131332   30645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:00:46.145982   30645 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:00:46.146672   30645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:00:46.146704   30645 cni.go:84] Creating CNI manager for ""
	I0923 11:00:46.146766   30645 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 11:00:46.146850   30645 start.go:340] cluster config:
	{Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:00:46.147043   30645 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:00:46.149786   30645 out.go:177] * Starting "ha-790780" primary control-plane node in "ha-790780" cluster
	I0923 11:00:46.151102   30645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:00:46.151155   30645 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 11:00:46.151168   30645 cache.go:56] Caching tarball of preloaded images
	I0923 11:00:46.151255   30645 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:00:46.151267   30645 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 11:00:46.151429   30645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/config.json ...
	I0923 11:00:46.151645   30645 start.go:360] acquireMachinesLock for ha-790780: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:00:46.151703   30645 start.go:364] duration metric: took 36.766µs to acquireMachinesLock for "ha-790780"
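
Note on the step above: start.go serializes machine operations behind a named lock ("acquireMachinesLock", Delay:500ms, Timeout:13m0s) and reports how long the acquisition took. The following Go snippet is only a minimal illustrative sketch of the general pattern (retrying a file-based exclusive lock until a timeout); it is not minikube's actual lock implementation, and the lock path used here is hypothetical.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock tries to create lockPath exclusively, retrying every delay
// until timeout expires. It returns a release function on success.
// (Hypothetical helper, not minikube's real locking code.)
func acquireLock(lockPath string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println("lock error:", err)
		return
	}
	defer release()
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}
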
	I0923 11:00:46.151722   30645 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:00:46.151729   30645 fix.go:54] fixHost starting: 
	I0923 11:00:46.151985   30645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:00:46.152022   30645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:00:46.166913   30645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0923 11:00:46.167278   30645 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:00:46.167671   30645 main.go:141] libmachine: Using API Version  1
	I0923 11:00:46.167685   30645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:00:46.168025   30645 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:00:46.168190   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.168326   30645 main.go:141] libmachine: (ha-790780) Calling .GetState
	I0923 11:00:46.169976   30645 fix.go:112] recreateIfNeeded on ha-790780: state=Running err=<nil>
	W0923 11:00:46.169996   30645 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:00:46.171880   30645 out.go:177] * Updating the running kvm2 "ha-790780" VM ...
	I0923 11:00:46.173241   30645 machine.go:93] provisionDockerMachine start ...
	I0923 11:00:46.173264   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:00:46.173466   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.175686   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.176082   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.176103   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.176267   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.176440   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.176592   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.176733   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.176885   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.177078   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.177088   30645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:00:46.286735   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 11:00:46.286767   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.287006   30645 buildroot.go:166] provisioning hostname "ha-790780"
	I0923 11:00:46.287028   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.287222   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.290117   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.290470   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.290494   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.290689   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.290854   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.291004   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.291143   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.291264   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.291441   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.291455   30645 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-790780 && echo "ha-790780" | sudo tee /etc/hostname
	I0923 11:00:46.419999   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-790780
	
	I0923 11:00:46.420021   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.422746   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.423161   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.423190   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.423324   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.423507   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.423708   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.423871   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.424019   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.424299   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.424321   30645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-790780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-790780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-790780' | sudo tee -a /etc/hosts; 
				fi
			fi
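
The shell fragment above idempotently maps the hostname to 127.0.1.1: it does nothing if an entry for ha-790780 already exists, rewrites an existing 127.0.1.1 line if there is one, and otherwise appends a new line. A rough Go equivalent of the same check/replace/append logic (a sketch only; minikube runs the shell version shown above over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line already maps
// the hostname, either rewrite an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	hasHost := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)

	for _, l := range lines {
		if hasHost.MatchString(l) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-790780"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
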
	I0923 11:00:46.539065   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:00:46.539090   30645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:00:46.539130   30645 buildroot.go:174] setting up certificates
	I0923 11:00:46.539157   30645 provision.go:84] configureAuth start
	I0923 11:00:46.539186   30645 main.go:141] libmachine: (ha-790780) Calling .GetMachineName
	I0923 11:00:46.539468   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:00:46.542430   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.542796   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.542824   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.542977   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.545452   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.545864   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.545892   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.546051   30645 provision.go:143] copyHostCerts
	I0923 11:00:46.546078   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:00:46.546116   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:00:46.546130   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:00:46.546201   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:00:46.546298   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:00:46.546318   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:00:46.546323   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:00:46.546351   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:00:46.546445   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:00:46.546475   30645 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:00:46.546480   30645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:00:46.546519   30645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:00:46.546591   30645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.ha-790780 san=[127.0.0.1 192.168.39.234 ha-790780 localhost minikube]
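
provision.go above generates a server certificate whose subject alternative names cover the VM's IP, hostname, localhost and the "minikube" name. The sketch below shows how such SANs are expressed with Go's crypto/x509; it produces a self-signed stand-in rather than the CA-signed server.pem the log describes, and the organization/SAN values are simply copied from the log line above for illustration.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in for the CA-signed server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-790780"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-790780", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.234")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
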
	I0923 11:00:46.722519   30645 provision.go:177] copyRemoteCerts
	I0923 11:00:46.722587   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:00:46.722614   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.725263   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.725643   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.725669   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.725886   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.726058   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.726201   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.726346   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:00:46.812646   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 11:00:46.812725   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:00:46.845772   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 11:00:46.845851   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 11:00:46.876414   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 11:00:46.876487   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:00:46.907338   30645 provision.go:87] duration metric: took 368.161257ms to configureAuth
	I0923 11:00:46.907368   30645 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:00:46.907647   30645 config.go:182] Loaded profile config "ha-790780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:00:46.907731   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:00:46.910339   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.910701   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:00:46.910722   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:00:46.910935   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:00:46.911099   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.911217   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:00:46.911384   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:00:46.911628   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:00:46.911821   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:00:46.911837   30645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:02:17.730123   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:02:17.730166   30645 machine.go:96] duration metric: took 1m31.556909446s to provisionDockerMachine
	I0923 11:02:17.730186   30645 start.go:293] postStartSetup for "ha-790780" (driver="kvm2")
	I0923 11:02:17.730200   30645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:02:17.730223   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:17.730524   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:02:17.730554   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.733871   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.734312   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.734341   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.734490   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.734655   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.734815   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.734926   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:17.820838   30645 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:02:17.824978   30645 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:02:17.825009   30645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:02:17.825078   30645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:02:17.825152   30645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:02:17.825162   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 11:02:17.825262   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:02:17.834787   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:02:17.859684   30645 start.go:296] duration metric: took 129.482031ms for postStartSetup
	I0923 11:02:17.859731   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:17.860060   30645 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0923 11:02:17.860093   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.863041   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.863532   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.863565   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.863774   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.864024   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.864195   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.864366   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	W0923 11:02:17.956857   30645 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0923 11:02:17.956889   30645 fix.go:56] duration metric: took 1m31.805160639s for fixHost
	I0923 11:02:17.956914   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:17.959350   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.959800   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:17.959820   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:17.960026   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:17.960214   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.960383   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:17.960504   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:17.960624   30645 main.go:141] libmachine: Using SSH client type: native
	I0923 11:02:17.960775   30645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0923 11:02:17.960785   30645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:02:18.066378   30645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727089338.034432394
	
	I0923 11:02:18.066398   30645 fix.go:216] guest clock: 1727089338.034432394
	I0923 11:02:18.066406   30645 fix.go:229] Guest: 2024-09-23 11:02:18.034432394 +0000 UTC Remote: 2024-09-23 11:02:17.956897234 +0000 UTC m=+91.925852974 (delta=77.53516ms)
	I0923 11:02:18.066466   30645 fix.go:200] guest clock delta is within tolerance: 77.53516ms
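
fix.go above reads the guest clock with `date +%s.%N` over SSH and compares it against the host clock, resyncing only when the delta exceeds a tolerance. A small sketch of that comparison, assuming the raw "seconds.nanoseconds" string is already captured (the one-second tolerance here is an assumed value, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1727089338.034432394" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate to 9 digits so "0344" means 0.0344s, not 344ns
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727089338.034432394")
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta <= tolerance)
}
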
	I0923 11:02:18.066473   30645 start.go:83] releasing machines lock for "ha-790780", held for 1m31.914758036s
	I0923 11:02:18.066500   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.066741   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:02:18.069323   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.069769   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.069794   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.069984   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070481   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070652   30645 main.go:141] libmachine: (ha-790780) Calling .DriverName
	I0923 11:02:18.070775   30645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:02:18.070818   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:18.070841   30645 ssh_runner.go:195] Run: cat /version.json
	I0923 11:02:18.070862   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHHostname
	I0923 11:02:18.073329   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073568   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073640   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.073661   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.073801   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:18.073980   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:18.074079   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:18.074105   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:18.074133   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:18.074317   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:18.074374   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHPort
	I0923 11:02:18.074530   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHKeyPath
	I0923 11:02:18.074658   30645 main.go:141] libmachine: (ha-790780) Calling .GetSSHUsername
	I0923 11:02:18.074807   30645 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/ha-790780/id_rsa Username:docker}
	I0923 11:02:18.155026   30645 ssh_runner.go:195] Run: systemctl --version
	I0923 11:02:18.176484   30645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:02:18.341906   30645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:02:18.348019   30645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:02:18.348097   30645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:02:18.358103   30645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:02:18.358131   30645 start.go:495] detecting cgroup driver to use...
	I0923 11:02:18.358210   30645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:02:18.375345   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:02:18.389411   30645 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:02:18.389494   30645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:02:18.403408   30645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:02:18.417609   30645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:02:18.572483   30645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:02:18.719749   30645 docker.go:233] disabling docker service ...
	I0923 11:02:18.719827   30645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:02:18.736985   30645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:02:18.750558   30645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:02:18.904237   30645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:02:19.064504   30645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:02:19.079601   30645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:02:19.098515   30645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:02:19.098580   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.109629   30645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:02:19.109710   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.120828   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.132015   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.143026   30645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:02:19.154534   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.166848   30645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.177991   30645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:02:19.188964   30645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:02:19.198947   30645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:02:19.208586   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:02:19.355796   30645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:02:19.589133   30645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:02:19.589200   30645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:02:19.594073   30645 start.go:563] Will wait 60s for crictl version
	I0923 11:02:19.594120   30645 ssh_runner.go:195] Run: which crictl
	I0923 11:02:19.597995   30645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
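
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear (via `stat`) before probing `crictl version`. A minimal polling sketch of that wait; the socket path comes from the log, while the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists (stat succeeds) or timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is available")
}
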
	I0923 11:02:19.637321   30645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:02:19.637427   30645 ssh_runner.go:195] Run: crio --version
	I0923 11:02:19.667582   30645 ssh_runner.go:195] Run: crio --version
	I0923 11:02:19.699606   30645 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:02:19.701110   30645 main.go:141] libmachine: (ha-790780) Calling .GetIP
	I0923 11:02:19.703843   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:19.704198   30645 main.go:141] libmachine: (ha-790780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:51:7d", ip: ""} in network mk-ha-790780: {Iface:virbr1 ExpiryTime:2024-09-23 11:51:38 +0000 UTC Type:0 Mac:52:54:00:56:51:7d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-790780 Clientid:01:52:54:00:56:51:7d}
	I0923 11:02:19.704232   30645 main.go:141] libmachine: (ha-790780) DBG | domain ha-790780 has defined IP address 192.168.39.234 and MAC address 52:54:00:56:51:7d in network mk-ha-790780
	I0923 11:02:19.704442   30645 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:02:19.709271   30645 kubeadm.go:883] updating cluster {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:02:19.709453   30645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:02:19.709500   30645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:02:19.753710   30645 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:02:19.753730   30645 crio.go:433] Images already preloaded, skipping extraction
	I0923 11:02:19.753775   30645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:02:19.788265   30645 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:02:19.788287   30645 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:02:19.788297   30645 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.31.1 crio true true} ...
	I0923 11:02:19.788401   30645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-790780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:02:19.788490   30645 ssh_runner.go:195] Run: crio config
	I0923 11:02:19.844387   30645 cni.go:84] Creating CNI manager for ""
	I0923 11:02:19.844411   30645 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 11:02:19.844423   30645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:02:19.844449   30645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-790780 NodeName:ha-790780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:02:19.844568   30645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-790780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:02:19.844584   30645 kube-vip.go:115] generating kube-vip config ...
	I0923 11:02:19.844621   30645 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 11:02:19.856144   30645 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 11:02:19.856254   30645 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 11:02:19.856307   30645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:02:19.865988   30645 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:02:19.866077   30645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 11:02:19.875936   30645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 11:02:19.892421   30645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:02:19.912476   30645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 11:02:19.929396   30645 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
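
The rendered kube-vip manifest above is copied into /etc/kubernetes/manifests, the kubelet's staticPodPath, so the kubelet runs it without involving the API server. The sketch below illustrates the usual safeguard when writing into a watched manifest directory locally, writing to a temp file and renaming so the kubelet never reads a partial file; this is a general technique shown for context, whereas the log copies the file over SSH.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeManifest writes data to dir/name via a temp file + rename, so a
// watcher of the directory never sees a half-written manifest.
func writeManifest(dir, name string, data []byte) error {
	tmp, err := os.CreateTemp(dir, name+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	if err := writeManifest("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
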
	I0923 11:02:19.946332   30645 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 11:02:19.959472   30645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:02:20.189897   30645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:02:20.303987   30645 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780 for IP: 192.168.39.234
	I0923 11:02:20.304012   30645 certs.go:194] generating shared ca certs ...
	I0923 11:02:20.304027   30645 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.304221   30645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:02:20.304291   30645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:02:20.304303   30645 certs.go:256] generating profile certs ...
	I0923 11:02:20.304435   30645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/client.key
	I0923 11:02:20.304469   30645 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31
	I0923 11:02:20.304482   30645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234 192.168.39.43 192.168.39.128 192.168.39.254]
	I0923 11:02:20.455240   30645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 ...
	I0923 11:02:20.455273   30645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31: {Name:mkdd13263d411ac22153f0ed73b22b324c896e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.455440   30645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31 ...
	I0923 11:02:20.455485   30645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31: {Name:mk70b0a21264793d843e117e3484249727f08088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:02:20.455570   30645 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt.a3101b31 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt
	I0923 11:02:20.455706   30645 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key.a3101b31 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key
	I0923 11:02:20.455832   30645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key
	I0923 11:02:20.455848   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:02:20.455862   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:02:20.455874   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:02:20.455888   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:02:20.455898   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:02:20.455910   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:02:20.455919   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:02:20.455932   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:02:20.455994   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:02:20.456027   30645 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:02:20.456034   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:02:20.456055   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:02:20.456079   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:02:20.456100   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:02:20.456136   30645 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:02:20.456161   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 11:02:20.456175   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:20.456186   30645 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 11:02:20.456709   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:02:20.566604   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:02:20.799900   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:02:20.899029   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:02:21.061537   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0923 11:02:21.286848   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:02:21.363403   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:02:21.447641   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/ha-790780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:02:21.529645   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:02:21.576218   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:02:21.609568   30645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:02:21.649302   30645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:02:21.670631   30645 ssh_runner.go:195] Run: openssl version
	I0923 11:02:21.677922   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:02:21.692951   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.698588   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.698652   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:02:21.706590   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:02:21.719760   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:02:21.734532   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.739531   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.739578   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:02:21.747343   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:02:21.762983   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:02:21.776333   30645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.782372   30645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.782425   30645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:02:21.789661   30645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
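
The three command sequences above install extra CA certificates on the node: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and then linked into /etc/ssl/certs under that hash name (3ec20f2e.0, b5213941.0 and 51391683.0 here), which is the filename OpenSSL-based clients use to look the CA up. The following is a minimal Go sketch of the same pattern, shelling out to openssl exactly as the logged commands do; it is an illustration only, not minikube's implementation, and the paths in main are placeholders.

	// Sketch: link a CA certificate into a certs directory under its
	// OpenSSL subject-hash name, mirroring the "openssl x509 -hash" +
	// "ln -fs" commands shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(pemPath, certsDir string) error {
		// Ask openssl for the subject hash, e.g. "3ec20f2e" for 111392.pem above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// OpenSSL resolves CA certs as <hash>.0 inside the certs directory.
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like "ln -fs"
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Placeholder paths; the log above uses /usr/share/ca-certificates/*.pem
		// and /etc/ssl/certs on the minikube node.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
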
	I0923 11:02:21.803779   30645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:02:21.809064   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:02:21.816048   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:02:21.825779   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:02:21.832682   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:02:21.839864   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:02:21.847520   30645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
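
The six openssl runs above use "-checkend 86400", i.e. they verify that each control-plane certificate (apiserver, etcd, front-proxy clients and peers) remains valid for at least another 24 hours before the cluster is started. Below is a minimal local equivalent in Go using crypto/x509; it is illustrative only, not minikube's code, and the certificate path in main is a placeholder.

	// Sketch: report whether a PEM certificate expires within the given
	// window, the check that "openssl x509 -noout -checkend 86400" performs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when the certificate's NotAfter falls inside the window.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log above checks certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
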
	I0923 11:02:21.856771   30645 kubeadm.go:392] StartCluster: {Name:ha-790780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-790780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.134 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:02:21.856882   30645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:02:21.856924   30645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:02:21.962509   30645 cri.go:89] found id: "f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212"
	I0923 11:02:21.962531   30645 cri.go:89] found id: "d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55"
	I0923 11:02:21.962535   30645 cri.go:89] found id: "c56c3580874be035e042518b502515665df5360bd21ae78b62026beabcae7cc6"
	I0923 11:02:21.962538   30645 cri.go:89] found id: "f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162"
	I0923 11:02:21.962541   30645 cri.go:89] found id: "4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298"
	I0923 11:02:21.962544   30645 cri.go:89] found id: "75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff"
	I0923 11:02:21.962546   30645 cri.go:89] found id: "83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b"
	I0923 11:02:21.962549   30645 cri.go:89] found id: "b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f"
	I0923 11:02:21.962551   30645 cri.go:89] found id: "22204bd495b03e28187d9154549a73a14b2715e53031cb7d2d6badcf29089638"
	I0923 11:02:21.962556   30645 cri.go:89] found id: "69655118ed4c82e8855377fae7bba4bbb2d8d9dd41da544be8d93bd0f03ec0e6"
	I0923 11:02:21.962558   30645 cri.go:89] found id: "be801ba2348da0180c4bcd4aac4fe465b20bbc3011e3dd67c0fb8b1c18034949"
	I0923 11:02:21.962560   30645 cri.go:89] found id: "fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc"
	I0923 11:02:21.962563   30645 cri.go:89] found id: "8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927"
	I0923 11:02:21.962565   30645 cri.go:89] found id: "20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5"
	I0923 11:02:21.962571   30645 cri.go:89] found id: "70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9"
	I0923 11:02:21.962575   30645 cri.go:89] found id: "579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e"
	I0923 11:02:21.962578   30645 cri.go:89] found id: "4881d47948f52ba94dac4d6aae3deded99dbee7ebfffb50582058d5e46ff039d"
	I0923 11:02:21.962582   30645 cri.go:89] found id: "621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989"
	I0923 11:02:21.962584   30645 cri.go:89] found id: ""
	I0923 11:02:21.962624   30645 ssh_runner.go:195] Run: sudo runc list -f json
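
For context, the container IDs reported as "found id:" above come from the quiet crictl listing at the start of this block, which prints one ID per line filtered by the kube-system namespace label. A small Go sketch that reproduces that query by shelling out to crictl follows; it is an illustration rather than minikube's cri.go, and it assumes sudo and crictl are available on the node.

	// Sketch: list kube-system container IDs the same way the logged
	// command does, by running crictl with --quiet and splitting the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
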
	
	
	==> CRI-O <==
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.602912667Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hmsb2,Uid:8e067811-dad7-4eae-8f9f-24b6d134c3be,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089373796842047,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:54:44.813863461Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-790780,Uid:67aed14e0871ee4d58ebb398bf32d9f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727089356106436839,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{kubernetes.io/config.hash: 67aed14e0871ee4d58ebb398bf32d9f6,kubernetes.io/config.seen: 2024-09-23T11:02:19.914727824Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bsbth,Uid:5d308ec2-ea22-47f7-966c-9b0a4410c764,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340251948579,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-23T10:52:20.219468289Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzhrs,Uid:730f9509-94d1-4b3f-b45e-bee6f2386d31,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340246692569,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:20.226442275Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-790780,Uid:292a50d5f74643d055dd7bcfbab1dbaf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340140949812,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.234:8443,kubernetes.io/config.hash: 292a50d5f74643d055dd7bcfbab1dbaf,kubernetes.io/config.seen: 2024-09-23T10:52:02.554631669Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-790780,Uid:61ebdcec6eabb6584f7929ac2d99660f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340135348780,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,tier: control-pl
ane,},Annotations:map[string]string{kubernetes.io/config.hash: 61ebdcec6eabb6584f7929ac2d99660f,kubernetes.io/config.seen: 2024-09-23T10:52:02.554635950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&PodSandboxMetadata{Name:kindnet-5d9ww,Uid:8d6249eb-6de3-413a-8acf-3804fd05badb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340077341366,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:07.068777040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&PodSandboxMetadata{Name:sto
rage-provisioner,Uid:fd672c2c-1784-44f0-adc7-e5184ddc96f9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340074128855,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\
":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T10:52:20.229007087Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&PodSandboxMetadata{Name:etcd-ha-790780,Uid:15d010bbb48c46b1437d3cf7cda623bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089340059499370,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.234:2379,kubernetes.io/config.hash: 15d010bbb48c46b1437d3cf7cda623bc,kubernetes.io/config.seen: 2024-09-23T10:52:02.554626850Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Po
dSandbox{Id:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-790780,Uid:255812681d1a0e612e49bf2f9931ab5b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089339991806928,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 255812681d1a0e612e49bf2f9931ab5b,kubernetes.io/config.seen: 2024-09-23T10:52:02.554633055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&PodSandboxMetadata{Name:kube-proxy-jqwtw,Uid:e60edcb9-c4a2-4116-b316-cc7777aa054f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727089339983582124,Labels:map[string]string{cont
roller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T10:52:07.073572528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=00d22151-f575-49b9-b4d9-003b67c8dde8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.603884411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8da1fd43-cc9d-469b-9163-f457cd11828d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.603940561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8da1fd43-cc9d-469b-9163-f457cd11828d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.604169322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bb
b48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8da1fd43-cc9d-469b-9163-f457cd11828d name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.640041029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7ee302b-57f4-4569-b474-93006abcf4ea name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.640454475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7ee302b-57f4-4569-b474-93006abcf4ea name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.641844094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2124227-f06a-43be-a7fe-95618d098d87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.642271479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089696642248677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2124227-f06a-43be-a7fe-95618d098d87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.642986204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30b1e6fe-5ad8-43d1-a2f1-694c85d3c0ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.643109123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30b1e6fe-5ad8-43d1-a2f1-694c85d3c0ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.643799425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30b1e6fe-5ad8-43d1-a2f1-694c85d3c0ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.690065087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2d12a71-bbd9-4d2b-8e00-420d2ba3e4fe name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.690137531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2d12a71-bbd9-4d2b-8e00-420d2ba3e4fe name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.691154466Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b0c9c38-3e2f-4cf7-9704-5366d84cd36c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.691962744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089696691935225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b0c9c38-3e2f-4cf7-9704-5366d84cd36c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.692506553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e353045-c724-41f8-b186-6de75a32496a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.692562153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e353045-c724-41f8-b186-6de75a32496a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.693063174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e353045-c724-41f8-b186-6de75a32496a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.752430012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a703255-65f7-49e6-a7d9-dc8559d22008 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.752529155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a703255-65f7-49e6-a7d9-dc8559d22008 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.753815607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d6a00ff-cffe-4e8c-81b4-5e3d542c582d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.754454783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089696754428797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d6a00ff-cffe-4e8c-81b4-5e3d542c582d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.754991359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3759a094-6a26-4ec2-a4e8-1944fc061420 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.755094518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3759a094-6a26-4ec2-a4e8-1944fc061420 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:08:16 ha-790780 crio[3637]: time="2024-09-23 11:08:16.755638557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6edcfd8c7545c358843c96279ada162fc72dd4515d923bc5a16369f83c1a90ae,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727089424616591945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727089395611430583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727089387611528949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53890ceb98ce449571ef64a867719928aa3508176841eeeeca6f51b9e26af6ba,PodSandboxId:891de0cca34eeff51c3dcf5feda2b987bb49a0131c921c4a688f25147da1197e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727089373930776339,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d67e29811c4bb3ef81d02cc27f6bf28ddf6106e566834171bb426761fb53cc86,PodSandboxId:64c1265acf6cd96480e262cd246df3d26498e88fee4ac50eca06105972758215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727089370610986066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd672c2c-1784-44f0-adc7-e5184ddc96f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30b891529fae87ccf46fe1be63109903c0ea3801959e8b4bdfdab925e03572,PodSandboxId:9f837719992a224e1b32ac16825cbbf4d9b040cbd8bfbb826cab6552bacc734e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727089356210009170,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67aed14e0871ee4d58ebb398bf32d9f6,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188,PodSandboxId:73e02d5cfff7ffb895baecda2b96134ac406b2e3ecf3d65d0219d3f47cdc2b05,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727089341149197922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55,PodSandboxId:8775ed754ced90af58a5b70b360151c002b68f6930b9721a7152771e96e8a927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727089340983866611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255812681d1a0e612e49bf2f9931ab5b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212,PodSandboxId:c81c26604c94a31759054a64b2361d320b2b39232168fca0ec7a6fd1af16e709,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727089341102746714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a50d5f74643d055dd7bcfbab1dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298,PodSandboxId:3865d2a32b68d647baba43baf02dd84e197b6c900fc807e30d3c342d63e0e4d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340798092705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff,PodSandboxId:ca9f662374b7c02005133c3cf45d984b8a574aab116e3da1649e67c9e974506f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727089340738944937,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162,PodSandboxId:b2dc0ade55a88901829c8c5e8c298baff8c9bf212fd1ed34c0c8d3a9f0058cc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727089340818143099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b,PodSandboxId:3f1f06e5066e4ba20022ffa6baf8e6a694c337bf2a8a044665d338980ab344b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727089340637022976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f,PodSandboxId:3bb84cae3317cff9acc1b4f73791cf91d9b960f08ff9a4c5297032f3a40ddfd2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727089340594152883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b6cdb320cb1265d915b7a62cf818b372757584c27bdd091cecb8f096bc038c0,PodSandboxId:64b2fb317bf54169f45ece7f04015b36facacfcce1485cc3cfbb1474b7333163,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727088889397828563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hmsb2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e067811-dad7-4eae-8f9f-24b6d134c3be,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc,PodSandboxId:7f70accb19994c05b5acb7a1f191d3d1fa1d1be601dc274f9e12fccfaa639149,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740832979273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vzhrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 730f9509-94d1-4b3f-b45e-bee6f2386d31,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927,PodSandboxId:61e4d18ef53ff868783a77e40ba43cdac33104a0566a4bb6c75dd071e75948c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727088740768781664,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bsbth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d308ec2-ea22-47f7-966c-9b0a4410c764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5,PodSandboxId:12e4b7f57870593d62196faf68952169aa273ec0f91d25c2a29248e1e0aba624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727088728991879207,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60edcb9-c4a2-4116-b316-cc7777aa054f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9,PodSandboxId:a1aa2ae427e365c51f44e5b0d58bdb6278d96d0f63eba3256225704a0654d7ec,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727088728409335220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5d9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6249eb-6de3-413a-8acf-3804fd05badb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e,PodSandboxId:d632e3d4755d2a4a75e5426032d56440696636f90ff4009781d69cc7822b243d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727088716269218919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61ebdcec6eabb6584f7929ac2d99660f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989,PodSandboxId:cf20e920bbbdf29c1ba90a775b7815b8acaf957668b4a7f5492acc8648a5af8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1727088716121003401,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-790780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15d010bbb48c46b1437d3cf7cda623bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3759a094-6a26-4ec2-a4e8-1944fc061420 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6edcfd8c7545c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   64c1265acf6cd       storage-provisioner
	86013bc9367e8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Running             kube-controller-manager   2                   8775ed754ced9       kube-controller-manager-ha-790780
	5d360ab7dc7cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Running             kube-apiserver            3                   c81c26604c94a       kube-apiserver-ha-790780
	53890ceb98ce4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   891de0cca34ee       busybox-7dff88458-hmsb2
	d67e29811c4bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   64c1265acf6cd       storage-provisioner
	6d30b891529fa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   9f837719992a2       kube-vip-ha-790780
	13561286caf9b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   73e02d5cfff7f       kube-proxy-jqwtw
	f8850e49700ea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   c81c26604c94a       kube-apiserver-ha-790780
	d656a4217f330       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   8775ed754ced9       kube-controller-manager-ha-790780
	f10b6c5729682       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   b2dc0ade55a88       coredns-7c65d6cfc9-bsbth
	4d39426c985ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   3865d2a32b68d       coredns-7c65d6cfc9-vzhrs
	75a0284bb89db       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   ca9f662374b7c       kindnet-5d9ww
	83ecacf23cf80       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   3f1f06e5066e4       kube-scheduler-ha-790780
	b663dbbec0498       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   3bb84cae3317c       etcd-ha-790780
	7b6cdb320cb12       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   64b2fb317bf54       busybox-7dff88458-hmsb2
	fceea5af30884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   7f70accb19994       coredns-7c65d6cfc9-vzhrs
	8f008021913ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   61e4d18ef53ff       coredns-7c65d6cfc9-bsbth
	20dea9bfd7b93       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   12e4b7f578705       kube-proxy-jqwtw
	70e8cba43f15f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   a1aa2ae427e36       kindnet-5d9ww
	579e069dd212e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   d632e3d4755d2       kube-scheduler-ha-790780
	621532bf94f06       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   cf20e920bbbdf       etcd-ha-790780
	
	
	==> coredns [4d39426c985ca93358b5c5c73bd6c95abf089e20246479f1d9eacd056d92f298] <==
	Trace[630445729]: [10.00145338s] [10.00145338s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1315314946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:02:30.410) (total time: 10001ms):
	Trace[1315314946]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:02:40.412)
	Trace[1315314946]: [10.001380413s] [10.001380413s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
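
Note: the repeated "TLS handshake timeout", "connection refused", and "no route to host" failures above are CoreDNS being unable to reach the apiserver ClusterIP (10.96.0.1:443) while the control plane was restarting; the later coredns sections show queries being answered normally once connectivity returned. A quick in-cluster sanity check after such a restart (pod name and context are illustrative, assuming the ha-790780 profile is still running) would be:

	kubectl --context ha-790780 run dns-probe --rm -it --restart=Never --image=busybox -- nslookup kubernetes.default.svc.cluster.local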
	
	
	==> coredns [8f008021913acabeed574c5a3a355c49586bf15caf7c65cc240e710ae21ca927] <==
	[INFO] 10.244.2.2:50254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000209403s
	[INFO] 10.244.1.2:48243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198306s
	[INFO] 10.244.1.2:39091 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230366s
	[INFO] 10.244.1.2:49543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199975s
	[INFO] 10.244.0.4:45173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102778s
	[INFO] 10.244.0.4:32836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736533s
	[INFO] 10.244.0.4:44659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129519s
	[INFO] 10.244.0.4:54433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098668s
	[INFO] 10.244.0.4:37772 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.2.2:43894 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134793s
	[INFO] 10.244.2.2:34604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147389s
	[INFO] 10.244.1.2:53532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242838s
	[INFO] 10.244.1.2:45804 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159901s
	[INFO] 10.244.1.2:39298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112738s
	[INFO] 10.244.0.4:43692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093071s
	[INFO] 10.244.0.4:51414 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096722s
	[INFO] 10.244.2.2:56355 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295938s
	[INFO] 10.244.1.2:59520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142399s
	[INFO] 10.244.0.4:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090911s
	[INFO] 10.244.0.4:53926 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114353s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1792&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10b6c57296821c98363dc29ec11dfee9310b2c6084037849827046c5b208162] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fceea5af308846c3db7318acccd5bf560fffab2ee9ad240c571e287f247354cc] <==
	[INFO] 10.244.2.2:60029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181162s
	[INFO] 10.244.2.2:38618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184142s
	[INFO] 10.244.1.2:46063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758433s
	[INFO] 10.244.1.2:60295 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001402726s
	[INFO] 10.244.1.2:38240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160236s
	[INFO] 10.244.1.2:41977 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113581s
	[INFO] 10.244.1.2:44892 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133741s
	[INFO] 10.244.0.4:47708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105848s
	[INFO] 10.244.0.4:58776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144697s
	[INFO] 10.244.0.4:33311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001202009s
	[INFO] 10.244.2.2:57039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019058s
	[INFO] 10.244.2.2:57127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153386s
	[INFO] 10.244.1.2:52843 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168874s
	[INFO] 10.244.0.4:40890 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014121s
	[INFO] 10.244.0.4:38864 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079009s
	[INFO] 10.244.2.2:47502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158927s
	[INFO] 10.244.2.2:57106 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185408s
	[INFO] 10.244.2.2:34447 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139026s
	[INFO] 10.244.1.2:59976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015634s
	[INFO] 10.244.1.2:53446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000288738s
	[INFO] 10.244.1.2:52114 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166821s
	[INFO] 10.244.0.4:54732 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099319s
	[INFO] 10.244.0.4:49290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071388s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-790780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_52_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:08:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:08:14 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:08:14 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:08:14 +0000   Mon, 23 Sep 2024 10:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:08:14 +0000   Mon, 23 Sep 2024 10:52:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    ha-790780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4137f4910e0940f183cebcb2073b69b7
	  System UUID:                4137f491-0e09-40f1-83ce-bcb2073b69b7
	  Boot ID:                    d20b206f-6d12-4950-af76-836822976902
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmsb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-bsbth             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-vzhrs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-790780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-5d9ww                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-790780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-790780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-jqwtw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-790780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-790780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m10s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-790780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-790780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-790780 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-790780 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Warning  ContainerGCFailed        6m15s (x2 over 7m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             6m2s (x3 over 6m51s)   kubelet          Node ha-790780 status is now: NodeNotReady
	  Normal   RegisteredNode           5m13s                  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           5m                     node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-790780 event: Registered Node ha-790780 in Controller
	
	
	Name:               ha-790780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_53_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:52:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:08:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:03:53 +0000   Mon, 23 Sep 2024 11:03:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-790780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87f6f3c7af44480934336376709a0c8
	  System UUID:                f87f6f3c-7af4-4480-9343-36376709a0c8
	  Boot ID:                    529d95b4-82c4-431d-ac12-76b1a8542c33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdk9n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-790780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-x2v9d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-790780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-790780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-x8fb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-790780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-790780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m2s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-790780-m02 status is now: NodeNotReady
	  Normal  Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-790780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-790780-m02 event: Registered Node ha-790780-m02 in Controller
	
	
	Name:               ha-790780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-790780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=ha-790780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T10_55_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:55:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-790780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:05:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:06:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:06:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:06:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 11:05:29 +0000   Mon, 23 Sep 2024 11:06:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    ha-790780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8bb8bb71d764d5397c864a970ca06f0
	  System UUID:                a8bb8bb7-1d76-4d53-97c8-64a970ca06f0
	  Boot ID:                    f9dcb2f1-92f9-4730-90bf-ce863aaad94d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fm44c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-sz6cc              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-58k4g           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-790780-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m13s                  node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           5m                     node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-790780-m04 event: Registered Node ha-790780-m04 in Controller
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-790780-m04 has been rebooted, boot id: f9dcb2f1-92f9-4730-90bf-ce863aaad94d
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-790780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-790780-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-790780-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-790780-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 4m32s)   node-controller  Node ha-790780-m04 status is now: NodeNotReady
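
The three node views above match what kubectl prints for this cluster; to re-collect them directly (context name assumed to follow the ha-790780 profile), something like:

	kubectl --context ha-790780 describe node ha-790780 ha-790780-m02 ha-790780-m04

would reproduce the same output. The unreachable:NoExecute/NoSchedule taints and Unknown conditions on ha-790780-m04 line up with the NodeNotReady events the node-controller recorded at the end of its event list.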
	
	
	==> dmesg <==
	[  +4.609594] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.519719] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.055679] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057192] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.186843] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.114356] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269409] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.949380] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.106869] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.060266] kauditd_printk_skb: 158 callbacks suppressed
	[Sep23 10:52] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.081963] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.787202] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.501695] kauditd_printk_skb: 41 callbacks suppressed
	[Sep23 10:53] kauditd_printk_skb: 26 callbacks suppressed
	[Sep23 11:02] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.147061] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.186029] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +0.154246] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.296440] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.828744] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	[ +16.115528] kauditd_printk_skb: 218 callbacks suppressed
	[Sep23 11:03] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [621532bf94f06bf30a97a7d00a8fc2dd1cc9e3b040b04e10ffcd611b75e3d989] <==
	{"level":"warn","ts":"2024-09-23T11:00:47.074851Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T11:00:39.294351Z","time spent":"7.780488095s","remote":"127.0.0.1:44930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/09/23 11:00:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-23T11:00:47.201462Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.234:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:00:47.201526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.234:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:00:47.201602Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"de9917ec5c740094","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-23T11:00:47.201815Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.201853Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.201879Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202040Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202123Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202217Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202246Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64eeb36cde65c3cc"}
	{"level":"info","ts":"2024-09-23T11:00:47.202253Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202263Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202301Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202337Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202416Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202462Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.202491Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:00:47.205777Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"warn","ts":"2024-09-23T11:00:47.205868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.949231322s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-23T11:00:47.205890Z","caller":"traceutil/trace.go:171","msg":"trace[218702658] range","detail":"{range_begin:; range_end:; }","duration":"8.949266718s","start":"2024-09-23T11:00:38.256616Z","end":"2024-09-23T11:00:47.205883Z","steps":["trace[218702658] 'agreement among raft nodes before linearized reading'  (duration: 8.949230156s)"],"step_count":1}
	{"level":"error","ts":"2024-09-23T11:00:47.205956Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-23T11:00:47.206023Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-09-23T11:00:47.206762Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-790780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"]}
	
	
	==> etcd [b663dbbec0498e478e69610972fb673a40b3b220c6768345364f3cfc1904731f] <==
	{"level":"info","ts":"2024-09-23T11:04:53.863430Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.868960Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"de9917ec5c740094","to":"147b37cffd14ab5b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-23T11:04:53.869014Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:04:53.878136Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"de9917ec5c740094","to":"147b37cffd14ab5b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-23T11:04:53.878183Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:04:55.480233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.232074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:04:55.480311Z","caller":"traceutil/trace.go:171","msg":"trace[654632820] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:2396; }","duration":"118.336336ms","start":"2024-09-23T11:04:55.361960Z","end":"2024-09-23T11:04:55.480296Z","steps":["trace[654632820] 'count revisions from in-memory index tree'  (duration: 117.372201ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:05:43.429973Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.128:56482","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-23T11:05:43.442071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(7272947728418980812 16039877851787559060)"}
	{"level":"info","ts":"2024-09-23T11:05:43.444126Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","removed-remote-peer-id":"147b37cffd14ab5b","removed-remote-peer-urls":["https://192.168.39.128:2380"]}
	{"level":"info","ts":"2024-09-23T11:05:43.444400Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.445310Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:05:43.445348Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.445972Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:05:43.446050Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:05:43.446178Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.447170Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b","error":"context canceled"}
	{"level":"warn","ts":"2024-09-23T11:05:43.447275Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"147b37cffd14ab5b","error":"failed to read 147b37cffd14ab5b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-23T11:05:43.447318Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.447577Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b","error":"context canceled"}
	{"level":"info","ts":"2024-09-23T11:05:43.447625Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"de9917ec5c740094","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:05:43.447647Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"147b37cffd14ab5b"}
	{"level":"info","ts":"2024-09-23T11:05:43.447663Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"de9917ec5c740094","removed-remote-peer-id":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.456962Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"de9917ec5c740094","remote-peer-id-stream-handler":"de9917ec5c740094","remote-peer-id-from":"147b37cffd14ab5b"}
	{"level":"warn","ts":"2024-09-23T11:05:43.464408Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.128:45784","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:08:17 up 16 min,  0 users,  load average: 0.49, 0.41, 0.29
	Linux ha-790780 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [70e8cba43f15fed299647b0b13ec923e204337e706cc566a4ab749c738ce74c9] <==
	I0923 11:00:19.674787       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:19.674841       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:00:19.674977       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:19.674984       1 main.go:299] handling current node
	I0923 11:00:19.674995       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:19.674999       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:19.675057       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:19.675082       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:29.676582       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:29.676657       1 main.go:299] handling current node
	I0923 11:00:29.676676       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:29.676695       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:29.676852       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:29.676877       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:29.677003       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:29.677041       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:00:39.675498       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:00:39.675549       1 main.go:299] handling current node
	I0923 11:00:39.675580       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:00:39.675589       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:00:39.675745       1 main.go:295] Handling node with IPs: map[192.168.39.128:{}]
	I0923 11:00:39.675769       1 main.go:322] Node ha-790780-m03 has CIDR [10.244.2.0/24] 
	I0923 11:00:39.675838       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:00:39.675867       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	E0923 11:00:45.279537       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [75a0284bb89db9496bb6030c8d727d87898f850f7fb77fc4c2bce973537355ff] <==
	I0923 11:07:32.268908       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:07:42.275472       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:07:42.275581       1 main.go:299] handling current node
	I0923 11:07:42.275609       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:07:42.275627       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:07:42.275837       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:07:42.275989       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:07:52.267797       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:07:52.267842       1 main.go:299] handling current node
	I0923 11:07:52.267857       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:07:52.267862       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:07:52.267987       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:07:52.268010       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:08:02.269044       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:08:02.269181       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:08:02.269447       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:08:02.269486       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	I0923 11:08:02.269564       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:08:02.269584       1 main.go:299] handling current node
	I0923 11:08:12.267982       1 main.go:295] Handling node with IPs: map[192.168.39.234:{}]
	I0923 11:08:12.268352       1 main.go:299] handling current node
	I0923 11:08:12.268471       1 main.go:295] Handling node with IPs: map[192.168.39.43:{}]
	I0923 11:08:12.268500       1 main.go:322] Node ha-790780-m02 has CIDR [10.244.1.0/24] 
	I0923 11:08:12.268660       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0923 11:08:12.268682       1 main.go:322] Node ha-790780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5d360ab7dc7cc2d53bb3b9f931dd24b9a3e1e07d3e3301017458d3c082c017a6] <==
	I0923 11:03:09.763590       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0923 11:03:09.837729       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:03:09.837820       1 policy_source.go:224] refreshing policies
	I0923 11:03:09.841235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:03:09.843472       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:03:09.849884       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:03:09.857632       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 11:03:09.859629       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:03:09.863421       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:03:09.863768       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:03:09.863848       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:03:09.863885       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:03:09.863908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:03:09.863932       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:03:09.865006       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:03:09.865059       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:03:09.865717       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0923 11:03:09.870693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.43]
	I0923 11:03:09.872889       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:03:09.880892       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0923 11:03:09.887458       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0923 11:03:09.924504       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 11:03:10.760941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 11:03:11.301718       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.234 192.168.39.43]
	W0923 11:03:21.434866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.234 192.168.39.43]
	
	
	==> kube-apiserver [f8850e49700ea88a33dd0ae8adcff9b8d5a3e6e51c343e0c316390eb9bd02212] <==
	I0923 11:02:21.860015       1 options.go:228] external host was not specified, using 192.168.39.234
	I0923 11:02:21.864300       1 server.go:142] Version: v1.31.1
	I0923 11:02:21.864892       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:02:22.747941       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 11:02:22.755308       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:02:22.760310       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 11:02:22.760445       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 11:02:22.760795       1 instance.go:232] Using reconciler: lease
	W0923 11:02:42.744237       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0923 11:02:42.747215       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0923 11:02:42.761929       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [86013bc9367e8ce480009beb83ffb68aba1f382590f3a8525581f2fb2694893e] <==
	I0923 11:06:30.299324       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:06:30.362425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.292296ms"
	I0923 11:06:30.362761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.704µs"
	I0923 11:06:33.007654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	I0923 11:06:35.447999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780-m04"
	E0923 11:06:37.999449       1 gc_controller.go:151] "Failed to get node" err="node \"ha-790780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-790780-m03"
	E0923 11:06:37.999587       1 gc_controller.go:151] "Failed to get node" err="node \"ha-790780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-790780-m03"
	E0923 11:06:37.999627       1 gc_controller.go:151] "Failed to get node" err="node \"ha-790780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-790780-m03"
	E0923 11:06:37.999653       1 gc_controller.go:151] "Failed to get node" err="node \"ha-790780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-790780-m03"
	E0923 11:06:37.999678       1 gc_controller.go:151] "Failed to get node" err="node \"ha-790780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-790780-m03"
	I0923 11:06:38.012772       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-790780-m03"
	I0923 11:06:38.054126       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-790780-m03"
	I0923 11:06:38.054427       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-790780-m03"
	I0923 11:06:38.089636       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-790780-m03"
	I0923 11:06:38.089938       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rqjzc"
	I0923 11:06:38.128206       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rqjzc"
	I0923 11:06:38.129388       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-790780-m03"
	I0923 11:06:38.162244       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-790780-m03"
	I0923 11:06:38.162348       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-790780-m03"
	I0923 11:06:38.188505       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-790780-m03"
	I0923 11:06:38.188597       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lzbx6"
	I0923 11:06:38.215898       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lzbx6"
	I0923 11:06:38.215942       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-790780-m03"
	I0923 11:06:38.244272       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-790780-m03"
	I0923 11:08:14.639595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-790780"
	
	
	==> kube-controller-manager [d656a4217f330be6b6260c7cf80c7542853c6dff421a1641ab9340de90c02b55] <==
	I0923 11:02:22.320684       1 serving.go:386] Generated self-signed cert in-memory
	I0923 11:02:22.796463       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 11:02:22.796506       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:02:22.798067       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 11:02:22.798565       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:02:22.798823       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:02:22.798897       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0923 11:02:43.767930       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.234:8443/healthz\": dial tcp 192.168.39.234:8443: connect: connection refused"
	
	
	==> kube-proxy [13561286caf9b71f405a4c9ee6df9e63bff33cb2e4283e2916cec2958ffb5188] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:02:23.812815       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:26.885053       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:29.957452       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:36.102598       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:02:45.316739       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0923 11:03:06.820896       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-790780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0923 11:03:06.821059       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0923 11:03:06.821183       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:03:06.860844       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:03:06.860993       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:03:06.861070       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:03:06.863827       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:03:06.864302       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:03:06.864433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:03:06.866605       1 config.go:199] "Starting service config controller"
	I0923 11:03:06.866679       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:03:06.866744       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:03:06.866773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:03:06.868115       1 config.go:328] "Starting node config controller"
	I0923 11:03:06.868159       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:03:09.167603       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:03:09.167707       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:03:09.168508       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [20dea9bfd7b934f52377190cf2f8cf97975023f6abc4e095bb50519d019f6fb5] <==
	E0923 10:59:34.854183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:34.854273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:34.854427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:37.924543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:37.924828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:41.000193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:41.000329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:44.068818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:44.068886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:44.069038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:44.069102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:53.284695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	W0923 10:59:53.285283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:53.285545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0923 10:59:53.285060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 10:59:56.355932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 10:59:56.356010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:08.644459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:08.644675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:14.788258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:14.788343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-790780&resourceVersion=1776\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:20.932658       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:20.933001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0923 11:00:42.436271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759": dial tcp 192.168.39.254:8443: connect: no route to host
	E0923 11:00:42.436501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1759\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [579e069dd212e4a9071e2532ef1cbcd004d1f5add3d8a9179689208e31477a9e] <==
	E0923 10:55:25.178321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-58k4g\": pod kube-proxy-58k4g is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-58k4g"
	E0923 10:55:25.223677       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.224053       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 143d16c9-72ab-4693-86a9-227280e3d88b(kube-system/kindnet-rhmrv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rhmrv"
	E0923 10:55:25.224238       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rhmrv\": pod kindnet-rhmrv is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-rhmrv"
	I0923 10:55:25.224407       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rhmrv" node="ha-790780-m04"
	E0923 10:55:25.257675       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.257807       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 20bf7e97-ed43-402a-b267-4c1d2f4b5bbf(kube-system/kindnet-sz6cc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sz6cc"
	E0923 10:55:25.257863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sz6cc\": pod kindnet-sz6cc is already assigned to node \"ha-790780-m04\"" pod="kube-system/kindnet-sz6cc"
	I0923 10:55:25.257906       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sz6cc" node="ha-790780-m04"
	E0923 10:55:25.260301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 10:55:25.260462       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e6f2d4b5-c6d7-4f34-b81a-2644640ae3bb(kube-system/kube-proxy-ghvw7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvw7"
	E0923 10:55:25.260529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvw7\": pod kube-proxy-ghvw7 is already assigned to node \"ha-790780-m04\"" pod="kube-system/kube-proxy-ghvw7"
	I0923 10:55:25.260575       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvw7" node="ha-790780-m04"
	E0923 11:00:38.412750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:40.170615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0923 11:00:41.294007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0923 11:00:41.338093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0923 11:00:42.348445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0923 11:00:42.606563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:42.762867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0923 11:00:44.038706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0923 11:00:44.725643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0923 11:00:45.118576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0923 11:00:46.454627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0923 11:00:47.034899       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [83ecacf23cf8024a10d414b9524f1e3209d24811e6a4592c5129e114fd96fb7b] <==
	W0923 11:02:59.272232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:02:59.272273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.234:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:00.149262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:00.149480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.234:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:00.442834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.234:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:00.442964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.234:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.100879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.101052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.234:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.161167       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.234:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.161308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.234:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:01.696105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:01.696177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.234:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.177145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.234:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.177253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.234:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.375884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.234:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.376034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.234:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:02.482342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:02.482552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.234:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:03.582725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:03.582859       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.234:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:04.164305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.234:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:04.164452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.234:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:05.353441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.234:8443: connect: connection refused
	E0923 11:03:05.353506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.234:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.234:8443: connect: connection refused" logger="UnhandledError"
	I0923 11:03:20.675908       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:07:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:07:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:07:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:07:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:07:02 ha-790780 kubelet[1310]: E0923 11:07:02.896754    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089622895913321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:02 ha-790780 kubelet[1310]: E0923 11:07:02.896805    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089622895913321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:12 ha-790780 kubelet[1310]: E0923 11:07:12.898574    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089632898076077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:12 ha-790780 kubelet[1310]: E0923 11:07:12.898613    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089632898076077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:22 ha-790780 kubelet[1310]: E0923 11:07:22.900133    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089642899440139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:22 ha-790780 kubelet[1310]: E0923 11:07:22.900183    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089642899440139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:32 ha-790780 kubelet[1310]: E0923 11:07:32.901221    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089652901010100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:32 ha-790780 kubelet[1310]: E0923 11:07:32.901243    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089652901010100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:42 ha-790780 kubelet[1310]: E0923 11:07:42.903880    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089662903280360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:42 ha-790780 kubelet[1310]: E0923 11:07:42.903921    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089662903280360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:52 ha-790780 kubelet[1310]: E0923 11:07:52.907164    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089672906249217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:07:52 ha-790780 kubelet[1310]: E0923 11:07:52.907869    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089672906249217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:08:02 ha-790780 kubelet[1310]: E0923 11:08:02.632157    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 11:08:02 ha-790780 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:08:02 ha-790780 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:08:02 ha-790780 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:08:02 ha-790780 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:08:02 ha-790780 kubelet[1310]: E0923 11:08:02.909699    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089682909307806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:08:02 ha-790780 kubelet[1310]: E0923 11:08:02.909745    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089682909307806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:08:12 ha-790780 kubelet[1310]: E0923 11:08:12.913950    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089692913165692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:08:12 ha-790780 kubelet[1310]: E0923 11:08:12.914815    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727089692913165692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:08:16.308642   33184 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19689-3961/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-790780 -n ha-790780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-790780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-399279
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-399279
E0923 11:24:15.431629   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-399279: exit status 82 (2m1.871032576s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-399279-m03"  ...
	* Stopping node "multinode-399279-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-399279" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-399279 --wait=true -v=8 --alsologtostderr
E0923 11:25:57.440698   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-399279 --wait=true -v=8 --alsologtostderr: (3m23.323452293s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-399279
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-399279 -n multinode-399279
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 logs -n 25: (1.463944738s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279:/home/docker/cp-test_multinode-399279-m02_multinode-399279.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279 sudo cat                                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m02_multinode-399279.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03:/home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279-m03 sudo cat                                   | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp testdata/cp-test.txt                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279:/home/docker/cp-test_multinode-399279-m03_multinode-399279.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279 sudo cat                                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02:/home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279-m02 sudo cat                                   | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-399279 node stop m03                                                          | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	| node    | multinode-399279 node start                                                             | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| stop    | -p multinode-399279                                                                     | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| start   | -p multinode-399279                                                                     | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
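	For reference, the copy/verify pattern audited in the rows above is a cp into a target node followed by an ssh ... sudo cat on that node. A minimal sketch, with the profile name, node names and paths taken from the table (the exact flag layout is assumed to match the audit entries):
	
	out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt multinode-399279:/home/docker/cp-test_multinode-399279-m03_multinode-399279.txt
	out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 sudo cat /home/docker/cp-test_multinode-399279-m03_multinode-399279.txt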
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:25:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:25:17.643296   43161 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:25:17.643548   43161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:17.643558   43161 out.go:358] Setting ErrFile to fd 2...
	I0923 11:25:17.643562   43161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:17.643734   43161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:25:17.644257   43161 out.go:352] Setting JSON to false
	I0923 11:25:17.645140   43161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4061,"bootTime":1727086657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:25:17.645235   43161 start.go:139] virtualization: kvm guest
	I0923 11:25:17.648201   43161 out.go:177] * [multinode-399279] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:25:17.649601   43161 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:25:17.649605   43161 notify.go:220] Checking for updates...
	I0923 11:25:17.651084   43161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:25:17.652560   43161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:25:17.653672   43161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:25:17.654822   43161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:25:17.656325   43161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:25:17.658150   43161 config.go:182] Loaded profile config "multinode-399279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:25:17.658283   43161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:25:17.658954   43161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:25:17.659009   43161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:25:17.675358   43161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I0923 11:25:17.675823   43161 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:25:17.676357   43161 main.go:141] libmachine: Using API Version  1
	I0923 11:25:17.676378   43161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:25:17.676717   43161 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:25:17.676913   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.711972   43161 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 11:25:17.713147   43161 start.go:297] selected driver: kvm2
	I0923 11:25:17.713161   43161 start.go:901] validating driver "kvm2" against &{Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:25:17.713321   43161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:25:17.713776   43161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:25:17.713870   43161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:25:17.728386   43161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:25:17.729063   43161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:25:17.729090   43161 cni.go:84] Creating CNI manager for ""
	I0923 11:25:17.729137   43161 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 11:25:17.729194   43161 start.go:340] cluster config:
	{Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:25:17.729344   43161 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:25:17.731024   43161 out.go:177] * Starting "multinode-399279" primary control-plane node in "multinode-399279" cluster
	I0923 11:25:17.732078   43161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:25:17.732120   43161 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 11:25:17.732127   43161 cache.go:56] Caching tarball of preloaded images
	I0923 11:25:17.732210   43161 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:25:17.732223   43161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 11:25:17.732355   43161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/config.json ...
	I0923 11:25:17.732601   43161 start.go:360] acquireMachinesLock for multinode-399279: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:25:17.732646   43161 start.go:364] duration metric: took 25.789µs to acquireMachinesLock for "multinode-399279"
	I0923 11:25:17.732660   43161 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:25:17.732665   43161 fix.go:54] fixHost starting: 
	I0923 11:25:17.732918   43161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:25:17.732947   43161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:25:17.747109   43161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0923 11:25:17.747543   43161 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:25:17.748038   43161 main.go:141] libmachine: Using API Version  1
	I0923 11:25:17.748056   43161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:25:17.748378   43161 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:25:17.748616   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.748774   43161 main.go:141] libmachine: (multinode-399279) Calling .GetState
	I0923 11:25:17.750248   43161 fix.go:112] recreateIfNeeded on multinode-399279: state=Running err=<nil>
	W0923 11:25:17.750265   43161 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:25:17.752010   43161 out.go:177] * Updating the running kvm2 "multinode-399279" VM ...
	I0923 11:25:17.753106   43161 machine.go:93] provisionDockerMachine start ...
	I0923 11:25:17.753124   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.753297   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.755684   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.756070   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.756095   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.756173   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.756347   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.756517   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.756673   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.756824   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.757020   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.757033   43161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:25:17.866595   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-399279
	
	I0923 11:25:17.866629   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:17.866848   43161 buildroot.go:166] provisioning hostname "multinode-399279"
	I0923 11:25:17.866874   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:17.867056   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.870010   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.870433   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.870454   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.870638   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.870822   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.870965   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.871096   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.871276   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.871445   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.871459   43161 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-399279 && echo "multinode-399279" | sudo tee /etc/hostname
	I0923 11:25:17.994229   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-399279
	
	I0923 11:25:17.994287   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.996842   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.997303   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.997328   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.997515   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.997713   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.997862   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.997981   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.998139   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.998328   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.998344   43161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-399279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-399279/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-399279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:25:18.106312   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:25:18.106348   43161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:25:18.106377   43161 buildroot.go:174] setting up certificates
	I0923 11:25:18.106389   43161 provision.go:84] configureAuth start
	I0923 11:25:18.106397   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:18.106647   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:25:18.109144   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.109530   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.109556   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.109711   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.111747   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.112146   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.112167   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.112249   43161 provision.go:143] copyHostCerts
	I0923 11:25:18.112279   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:25:18.112312   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:25:18.112326   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:25:18.112395   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:25:18.112490   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:25:18.112517   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:25:18.112528   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:25:18.112570   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:25:18.112628   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:25:18.112645   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:25:18.112651   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:25:18.112675   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:25:18.112721   43161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.multinode-399279 san=[127.0.0.1 192.168.39.71 localhost minikube multinode-399279]
	I0923 11:25:18.291323   43161 provision.go:177] copyRemoteCerts
	I0923 11:25:18.291391   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:25:18.291419   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.294125   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.294385   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.294405   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.294567   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:18.294728   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.294864   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:18.294966   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:25:18.380924   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 11:25:18.380990   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:25:18.408331   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 11:25:18.408407   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 11:25:18.432961   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 11:25:18.433027   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:25:18.458442   43161 provision.go:87] duration metric: took 352.041262ms to configureAuth
	I0923 11:25:18.458466   43161 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:25:18.458663   43161 config.go:182] Loaded profile config "multinode-399279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:25:18.458731   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.461353   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.461710   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.461739   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.461928   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:18.462129   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.462310   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.462452   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:18.462635   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:18.462806   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:18.462821   43161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:26:49.262238   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:26:49.262268   43161 machine.go:96] duration metric: took 1m31.509149402s to provisionDockerMachine
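	Nearly all of that 1m31.5s sits inside the single SSH command issued at 11:25:18.462 above (the one ending in "sudo systemctl restart crio"), whose result only came back at 11:26:49.262: 11:26:49.262 - 11:25:18.463 ≈ 90.8s of the 91.5s total, presumably dominated by the crio restart itself.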
	I0923 11:26:49.262281   43161 start.go:293] postStartSetup for "multinode-399279" (driver="kvm2")
	I0923 11:26:49.262292   43161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:26:49.262314   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.262672   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:26:49.262699   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.265711   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.266146   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.266174   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.266480   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.266694   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.266894   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.267070   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.352798   43161 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:26:49.356978   43161 command_runner.go:130] > NAME=Buildroot
	I0923 11:26:49.357000   43161 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 11:26:49.357007   43161 command_runner.go:130] > ID=buildroot
	I0923 11:26:49.357015   43161 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 11:26:49.357023   43161 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 11:26:49.357057   43161 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:26:49.357072   43161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:26:49.357147   43161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:26:49.357227   43161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:26:49.357236   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 11:26:49.357324   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:26:49.366730   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:26:49.390798   43161 start.go:296] duration metric: took 128.504928ms for postStartSetup
	I0923 11:26:49.390835   43161 fix.go:56] duration metric: took 1m31.658169753s for fixHost
	I0923 11:26:49.390854   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.393571   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.394016   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.394044   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.394199   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.394408   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.394606   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.394771   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.394936   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:26:49.395130   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:26:49.395141   43161 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:26:49.502407   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727090809.479251002
	
	I0923 11:26:49.502435   43161 fix.go:216] guest clock: 1727090809.479251002
	I0923 11:26:49.502442   43161 fix.go:229] Guest: 2024-09-23 11:26:49.479251002 +0000 UTC Remote: 2024-09-23 11:26:49.390839845 +0000 UTC m=+91.782828835 (delta=88.411157ms)
	I0923 11:26:49.502488   43161 fix.go:200] guest clock delta is within tolerance: 88.411157ms
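	The delta above is simply the difference between the two readings: 1727090809.479251002 (guest "date +%s.%N") - 1727090809.390839845 (the Remote wall clock, 11:26:49.390839845 UTC) ≈ 0.088411157s = 88.411157ms, hence the "within tolerance" verdict.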
	I0923 11:26:49.502494   43161 start.go:83] releasing machines lock for "multinode-399279", held for 1m31.769838702s
	I0923 11:26:49.502515   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.502752   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:26:49.505199   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.505554   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.505579   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.505777   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506264   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506462   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506551   43161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:26:49.506598   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.506657   43161 ssh_runner.go:195] Run: cat /version.json
	I0923 11:26:49.506684   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.509049   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509218   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509420   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.509447   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509598   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.509680   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.509704   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509753   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.509866   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.509926   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.510009   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.510080   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.510423   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.510551   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.613916   43161 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 11:26:49.614128   43161 ssh_runner.go:195] Run: systemctl --version
	I0923 11:26:49.640952   43161 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 11:26:49.641605   43161 command_runner.go:130] > systemd 252 (252)
	I0923 11:26:49.641653   43161 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 11:26:49.641716   43161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:26:49.802111   43161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:26:49.810204   43161 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 11:26:49.810470   43161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:26:49.810542   43161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:26:49.820337   43161 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:26:49.820362   43161 start.go:495] detecting cgroup driver to use...
	I0923 11:26:49.820416   43161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:26:49.837883   43161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:26:49.852495   43161 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:26:49.852570   43161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:26:49.867616   43161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:26:49.882264   43161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:26:50.044463   43161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:26:50.198999   43161 docker.go:233] disabling docker service ...
	I0923 11:26:50.199075   43161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:26:50.216859   43161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:26:50.230927   43161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:26:50.371252   43161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:26:50.513133   43161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:26:50.527234   43161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:26:50.548272   43161 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0923 11:26:50.548690   43161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:26:50.548744   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.559778   43161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:26:50.559839   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.571347   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.583042   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.593824   43161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:26:50.604652   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.615471   43161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.626063   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.636655   43161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:26:50.646298   43161 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 11:26:50.646368   43161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:26:50.656018   43161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:26:50.794419   43161 ssh_runner.go:195] Run: sudo systemctl restart crio
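	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment ahead of this restart. This is a reconstruction from the commands shown, not a capture of the file; the surrounding TOML sections, key ordering and any pre-existing settings are assumptions:
	
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]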
	I0923 11:26:51.532100   43161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:26:51.532167   43161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:26:51.537187   43161 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0923 11:26:51.537214   43161 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 11:26:51.537220   43161 command_runner.go:130] > Device: 0,22	Inode: 1314        Links: 1
	I0923 11:26:51.537227   43161 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:26:51.537234   43161 command_runner.go:130] > Access: 2024-09-23 11:26:51.430312044 +0000
	I0923 11:26:51.537242   43161 command_runner.go:130] > Modify: 2024-09-23 11:26:51.414311712 +0000
	I0923 11:26:51.537268   43161 command_runner.go:130] > Change: 2024-09-23 11:26:51.414311712 +0000
	I0923 11:26:51.537278   43161 command_runner.go:130] >  Birth: -
	I0923 11:26:51.537299   43161 start.go:563] Will wait 60s for crictl version
	I0923 11:26:51.537337   43161 ssh_runner.go:195] Run: which crictl
	I0923 11:26:51.541043   43161 command_runner.go:130] > /usr/bin/crictl
	I0923 11:26:51.541120   43161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:26:51.580357   43161 command_runner.go:130] > Version:  0.1.0
	I0923 11:26:51.580485   43161 command_runner.go:130] > RuntimeName:  cri-o
	I0923 11:26:51.580512   43161 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0923 11:26:51.580661   43161 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 11:26:51.581924   43161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:26:51.581983   43161 ssh_runner.go:195] Run: crio --version
	I0923 11:26:51.610968   43161 command_runner.go:130] > crio version 1.29.1
	I0923 11:26:51.610989   43161 command_runner.go:130] > Version:        1.29.1
	I0923 11:26:51.610996   43161 command_runner.go:130] > GitCommit:      unknown
	I0923 11:26:51.611000   43161 command_runner.go:130] > GitCommitDate:  unknown
	I0923 11:26:51.611004   43161 command_runner.go:130] > GitTreeState:   clean
	I0923 11:26:51.611011   43161 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 11:26:51.611015   43161 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 11:26:51.611019   43161 command_runner.go:130] > Compiler:       gc
	I0923 11:26:51.611023   43161 command_runner.go:130] > Platform:       linux/amd64
	I0923 11:26:51.611027   43161 command_runner.go:130] > Linkmode:       dynamic
	I0923 11:26:51.611031   43161 command_runner.go:130] > BuildTags:      
	I0923 11:26:51.611043   43161 command_runner.go:130] >   containers_image_ostree_stub
	I0923 11:26:51.611053   43161 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 11:26:51.611059   43161 command_runner.go:130] >   btrfs_noversion
	I0923 11:26:51.611069   43161 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 11:26:51.611075   43161 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 11:26:51.611083   43161 command_runner.go:130] >   seccomp
	I0923 11:26:51.611091   43161 command_runner.go:130] > LDFlags:          unknown
	I0923 11:26:51.611101   43161 command_runner.go:130] > SeccompEnabled:   true
	I0923 11:26:51.611106   43161 command_runner.go:130] > AppArmorEnabled:  false
	I0923 11:26:51.611191   43161 ssh_runner.go:195] Run: crio --version
	I0923 11:26:51.640366   43161 command_runner.go:130] > crio version 1.29.1
	I0923 11:26:51.640392   43161 command_runner.go:130] > Version:        1.29.1
	I0923 11:26:51.640399   43161 command_runner.go:130] > GitCommit:      unknown
	I0923 11:26:51.640411   43161 command_runner.go:130] > GitCommitDate:  unknown
	I0923 11:26:51.640418   43161 command_runner.go:130] > GitTreeState:   clean
	I0923 11:26:51.640429   43161 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 11:26:51.640434   43161 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 11:26:51.640438   43161 command_runner.go:130] > Compiler:       gc
	I0923 11:26:51.640443   43161 command_runner.go:130] > Platform:       linux/amd64
	I0923 11:26:51.640448   43161 command_runner.go:130] > Linkmode:       dynamic
	I0923 11:26:51.640453   43161 command_runner.go:130] > BuildTags:      
	I0923 11:26:51.640458   43161 command_runner.go:130] >   containers_image_ostree_stub
	I0923 11:26:51.640462   43161 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 11:26:51.640466   43161 command_runner.go:130] >   btrfs_noversion
	I0923 11:26:51.640471   43161 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 11:26:51.640475   43161 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 11:26:51.640479   43161 command_runner.go:130] >   seccomp
	I0923 11:26:51.640484   43161 command_runner.go:130] > LDFlags:          unknown
	I0923 11:26:51.640488   43161 command_runner.go:130] > SeccompEnabled:   true
	I0923 11:26:51.640494   43161 command_runner.go:130] > AppArmorEnabled:  false
	I0923 11:26:51.642610   43161 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:26:51.643704   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:26:51.646202   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:51.646532   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:51.646559   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:51.646701   43161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:26:51.650751   43161 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0923 11:26:51.650979   43161 kubeadm.go:883] updating cluster {Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:26:51.651107   43161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:26:51.651163   43161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:26:51.695073   43161 command_runner.go:130] > {
	I0923 11:26:51.695093   43161 command_runner.go:130] >   "images": [
	I0923 11:26:51.695100   43161 command_runner.go:130] >     {
	I0923 11:26:51.695107   43161 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 11:26:51.695112   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695118   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 11:26:51.695123   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695127   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695137   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 11:26:51.695144   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 11:26:51.695148   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695152   43161 command_runner.go:130] >       "size": "87190579",
	I0923 11:26:51.695156   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695160   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695165   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695175   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695179   43161 command_runner.go:130] >     },
	I0923 11:26:51.695182   43161 command_runner.go:130] >     {
	I0923 11:26:51.695190   43161 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 11:26:51.695196   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695205   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 11:26:51.695211   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695217   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695229   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 11:26:51.695246   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 11:26:51.695251   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695257   43161 command_runner.go:130] >       "size": "1363676",
	I0923 11:26:51.695263   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695274   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695281   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695286   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695294   43161 command_runner.go:130] >     },
	I0923 11:26:51.695299   43161 command_runner.go:130] >     {
	I0923 11:26:51.695310   43161 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 11:26:51.695319   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695330   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 11:26:51.695339   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695344   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695358   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 11:26:51.695373   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 11:26:51.695381   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695388   43161 command_runner.go:130] >       "size": "31470524",
	I0923 11:26:51.695396   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695400   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695404   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695407   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695413   43161 command_runner.go:130] >     },
	I0923 11:26:51.695418   43161 command_runner.go:130] >     {
	I0923 11:26:51.695426   43161 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 11:26:51.695432   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695437   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 11:26:51.695443   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695446   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695462   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 11:26:51.695478   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 11:26:51.695484   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695488   43161 command_runner.go:130] >       "size": "63273227",
	I0923 11:26:51.695494   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695499   43161 command_runner.go:130] >       "username": "nonroot",
	I0923 11:26:51.695505   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695509   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695515   43161 command_runner.go:130] >     },
	I0923 11:26:51.695519   43161 command_runner.go:130] >     {
	I0923 11:26:51.695528   43161 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 11:26:51.695536   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695543   43161 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 11:26:51.695547   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695552   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695559   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 11:26:51.695567   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 11:26:51.695571   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695577   43161 command_runner.go:130] >       "size": "149009664",
	I0923 11:26:51.695581   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695585   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695590   43161 command_runner.go:130] >       },
	I0923 11:26:51.695594   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695601   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695605   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695611   43161 command_runner.go:130] >     },
	I0923 11:26:51.695614   43161 command_runner.go:130] >     {
	I0923 11:26:51.695622   43161 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 11:26:51.695627   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695632   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 11:26:51.695637   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695641   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695650   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 11:26:51.695657   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 11:26:51.695663   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695667   43161 command_runner.go:130] >       "size": "95237600",
	I0923 11:26:51.695673   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695677   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695683   43161 command_runner.go:130] >       },
	I0923 11:26:51.695686   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695692   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695696   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695702   43161 command_runner.go:130] >     },
	I0923 11:26:51.695705   43161 command_runner.go:130] >     {
	I0923 11:26:51.695713   43161 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 11:26:51.695718   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695723   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 11:26:51.695729   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695732   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695739   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 11:26:51.695749   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 11:26:51.695754   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695759   43161 command_runner.go:130] >       "size": "89437508",
	I0923 11:26:51.695764   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695768   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695773   43161 command_runner.go:130] >       },
	I0923 11:26:51.695777   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695783   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695787   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695793   43161 command_runner.go:130] >     },
	I0923 11:26:51.695796   43161 command_runner.go:130] >     {
	I0923 11:26:51.695803   43161 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 11:26:51.695810   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695815   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 11:26:51.695820   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695824   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695839   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 11:26:51.695849   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 11:26:51.695854   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695858   43161 command_runner.go:130] >       "size": "92733849",
	I0923 11:26:51.695864   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695868   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695873   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695878   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695881   43161 command_runner.go:130] >     },
	I0923 11:26:51.695884   43161 command_runner.go:130] >     {
	I0923 11:26:51.695890   43161 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 11:26:51.695893   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695898   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 11:26:51.695901   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695904   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695911   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 11:26:51.695918   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 11:26:51.695921   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695925   43161 command_runner.go:130] >       "size": "68420934",
	I0923 11:26:51.695928   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695932   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695935   43161 command_runner.go:130] >       },
	I0923 11:26:51.695938   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695942   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695945   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695948   43161 command_runner.go:130] >     },
	I0923 11:26:51.695951   43161 command_runner.go:130] >     {
	I0923 11:26:51.695956   43161 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 11:26:51.695965   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695970   43161 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 11:26:51.695973   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695977   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695983   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 11:26:51.695989   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 11:26:51.695996   43161 command_runner.go:130] >       ],
	I0923 11:26:51.696000   43161 command_runner.go:130] >       "size": "742080",
	I0923 11:26:51.696006   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.696010   43161 command_runner.go:130] >         "value": "65535"
	I0923 11:26:51.696015   43161 command_runner.go:130] >       },
	I0923 11:26:51.696019   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.696026   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.696029   43161 command_runner.go:130] >       "pinned": true
	I0923 11:26:51.696035   43161 command_runner.go:130] >     }
	I0923 11:26:51.696038   43161 command_runner.go:130] >   ]
	I0923 11:26:51.696041   43161 command_runner.go:130] > }
	I0923 11:26:51.696197   43161 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:26:51.696208   43161 crio.go:433] Images already preloaded, skipping extraction
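
Editor's note: the "all images are preloaded" / "Images already preloaded, skipping extraction" decision above follows from listing the runtime's images with `sudo crictl images --output json` and checking the required repo tags against the output. A minimal sketch of decoding that JSON shape in Go, assuming illustrative names (crictlImage, preloadedImagesPresent) rather than minikube's actual implementation:

// Hedged sketch: decode `crictl images --output json` (shape as seen in the
// log above) and check whether a set of required repo tags is present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func preloadedImagesPresent(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadedImagesPresent([]string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	})
	fmt.Println(ok, err)
}
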
	I0923 11:26:51.696249   43161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:26:51.730298   43161 command_runner.go:130] > {
	I0923 11:26:51.730326   43161 command_runner.go:130] >   "images": [
	I0923 11:26:51.730332   43161 command_runner.go:130] >     {
	I0923 11:26:51.730345   43161 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 11:26:51.730352   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730361   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 11:26:51.730367   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730373   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730386   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 11:26:51.730401   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 11:26:51.730408   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730414   43161 command_runner.go:130] >       "size": "87190579",
	I0923 11:26:51.730421   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730430   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730443   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730456   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730463   43161 command_runner.go:130] >     },
	I0923 11:26:51.730469   43161 command_runner.go:130] >     {
	I0923 11:26:51.730478   43161 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 11:26:51.730488   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730497   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 11:26:51.730506   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730512   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730525   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 11:26:51.730535   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 11:26:51.730541   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730551   43161 command_runner.go:130] >       "size": "1363676",
	I0923 11:26:51.730558   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730570   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730579   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730587   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730592   43161 command_runner.go:130] >     },
	I0923 11:26:51.730596   43161 command_runner.go:130] >     {
	I0923 11:26:51.730602   43161 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 11:26:51.730608   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730614   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 11:26:51.730619   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730625   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730639   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 11:26:51.730654   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 11:26:51.730663   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730669   43161 command_runner.go:130] >       "size": "31470524",
	I0923 11:26:51.730677   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730686   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730695   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730707   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730717   43161 command_runner.go:130] >     },
	I0923 11:26:51.730723   43161 command_runner.go:130] >     {
	I0923 11:26:51.730735   43161 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 11:26:51.730745   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730754   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 11:26:51.730763   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730773   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730788   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 11:26:51.730806   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 11:26:51.730815   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730825   43161 command_runner.go:130] >       "size": "63273227",
	I0923 11:26:51.730833   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730843   43161 command_runner.go:130] >       "username": "nonroot",
	I0923 11:26:51.730851   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730856   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730863   43161 command_runner.go:130] >     },
	I0923 11:26:51.730871   43161 command_runner.go:130] >     {
	I0923 11:26:51.730881   43161 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 11:26:51.730890   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730899   43161 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 11:26:51.730907   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730917   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730931   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 11:26:51.730945   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 11:26:51.730954   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730964   43161 command_runner.go:130] >       "size": "149009664",
	I0923 11:26:51.730973   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.730981   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.730988   43161 command_runner.go:130] >       },
	I0923 11:26:51.730991   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730995   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731001   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731005   43161 command_runner.go:130] >     },
	I0923 11:26:51.731012   43161 command_runner.go:130] >     {
	I0923 11:26:51.731020   43161 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 11:26:51.731026   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731031   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 11:26:51.731036   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731040   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731049   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 11:26:51.731058   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 11:26:51.731063   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731067   43161 command_runner.go:130] >       "size": "95237600",
	I0923 11:26:51.731073   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731077   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731083   43161 command_runner.go:130] >       },
	I0923 11:26:51.731087   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731093   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731097   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731101   43161 command_runner.go:130] >     },
	I0923 11:26:51.731105   43161 command_runner.go:130] >     {
	I0923 11:26:51.731113   43161 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 11:26:51.731117   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731122   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 11:26:51.731125   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731129   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731139   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 11:26:51.731148   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 11:26:51.731153   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731157   43161 command_runner.go:130] >       "size": "89437508",
	I0923 11:26:51.731163   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731167   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731173   43161 command_runner.go:130] >       },
	I0923 11:26:51.731177   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731182   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731185   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731190   43161 command_runner.go:130] >     },
	I0923 11:26:51.731193   43161 command_runner.go:130] >     {
	I0923 11:26:51.731199   43161 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 11:26:51.731206   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731211   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 11:26:51.731216   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731220   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731233   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 11:26:51.731245   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 11:26:51.731249   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731253   43161 command_runner.go:130] >       "size": "92733849",
	I0923 11:26:51.731256   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.731259   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731263   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731266   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731269   43161 command_runner.go:130] >     },
	I0923 11:26:51.731272   43161 command_runner.go:130] >     {
	I0923 11:26:51.731278   43161 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 11:26:51.731282   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731286   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 11:26:51.731289   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731293   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731299   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 11:26:51.731306   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 11:26:51.731309   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731313   43161 command_runner.go:130] >       "size": "68420934",
	I0923 11:26:51.731317   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731320   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731324   43161 command_runner.go:130] >       },
	I0923 11:26:51.731327   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731330   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731334   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731337   43161 command_runner.go:130] >     },
	I0923 11:26:51.731342   43161 command_runner.go:130] >     {
	I0923 11:26:51.731348   43161 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 11:26:51.731352   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731356   43161 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 11:26:51.731359   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731362   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731369   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 11:26:51.731410   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 11:26:51.731421   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731426   43161 command_runner.go:130] >       "size": "742080",
	I0923 11:26:51.731429   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731433   43161 command_runner.go:130] >         "value": "65535"
	I0923 11:26:51.731438   43161 command_runner.go:130] >       },
	I0923 11:26:51.731442   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731448   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731456   43161 command_runner.go:130] >       "pinned": true
	I0923 11:26:51.731462   43161 command_runner.go:130] >     }
	I0923 11:26:51.731465   43161 command_runner.go:130] >   ]
	I0923 11:26:51.731468   43161 command_runner.go:130] > }
	I0923 11:26:51.731584   43161 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:26:51.731594   43161 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:26:51.731601   43161 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.31.1 crio true true} ...
	I0923 11:26:51.731689   43161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-399279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
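
Editor's note: the kubelet block above is a systemd drop-in. The empty `ExecStart=` line clears whatever ExecStart the base unit defined, and the following `ExecStart=` sets the node-specific command, which is how each node gets its own --hostname-override and --node-ip. A minimal Go sketch of rendering such a drop-in; the template variable names and the rendering helper are illustrative, only the unit text mirrors the log:

// Hedged sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// The first (empty) ExecStart= resets the inherited command; the second
	// sets the per-node command with this node's name and IP.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "multinode-399279",
		"NodeIP":            "192.168.39.71",
	})
}
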
	I0923 11:26:51.731759   43161 ssh_runner.go:195] Run: crio config
	I0923 11:26:51.764296   43161 command_runner.go:130] ! time="2024-09-23 11:26:51.741091425Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0923 11:26:51.770652   43161 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0923 11:26:51.777424   43161 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0923 11:26:51.777458   43161 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0923 11:26:51.777469   43161 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0923 11:26:51.777474   43161 command_runner.go:130] > #
	I0923 11:26:51.777484   43161 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0923 11:26:51.777497   43161 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0923 11:26:51.777506   43161 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0923 11:26:51.777519   43161 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0923 11:26:51.777526   43161 command_runner.go:130] > # reload'.
	I0923 11:26:51.777537   43161 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0923 11:26:51.777551   43161 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0923 11:26:51.777561   43161 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0923 11:26:51.777571   43161 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0923 11:26:51.777580   43161 command_runner.go:130] > [crio]
	I0923 11:26:51.777589   43161 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0923 11:26:51.777600   43161 command_runner.go:130] > # containers images, in this directory.
	I0923 11:26:51.777607   43161 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0923 11:26:51.777625   43161 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0923 11:26:51.777633   43161 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0923 11:26:51.777644   43161 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0923 11:26:51.777652   43161 command_runner.go:130] > # imagestore = ""
	I0923 11:26:51.777661   43161 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0923 11:26:51.777670   43161 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0923 11:26:51.777680   43161 command_runner.go:130] > storage_driver = "overlay"
	I0923 11:26:51.777689   43161 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0923 11:26:51.777700   43161 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0923 11:26:51.777708   43161 command_runner.go:130] > storage_option = [
	I0923 11:26:51.777715   43161 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0923 11:26:51.777723   43161 command_runner.go:130] > ]
	I0923 11:26:51.777732   43161 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0923 11:26:51.777745   43161 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0923 11:26:51.777754   43161 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0923 11:26:51.777766   43161 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0923 11:26:51.777778   43161 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0923 11:26:51.777788   43161 command_runner.go:130] > # always happen on a node reboot
	I0923 11:26:51.777798   43161 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0923 11:26:51.777814   43161 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0923 11:26:51.777826   43161 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0923 11:26:51.777837   43161 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0923 11:26:51.777848   43161 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0923 11:26:51.777862   43161 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0923 11:26:51.777878   43161 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0923 11:26:51.777887   43161 command_runner.go:130] > # internal_wipe = true
	I0923 11:26:51.777898   43161 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0923 11:26:51.777903   43161 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0923 11:26:51.777909   43161 command_runner.go:130] > # internal_repair = false
	I0923 11:26:51.777917   43161 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0923 11:26:51.777924   43161 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0923 11:26:51.777930   43161 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0923 11:26:51.777937   43161 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0923 11:26:51.777944   43161 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0923 11:26:51.777950   43161 command_runner.go:130] > [crio.api]
	I0923 11:26:51.777955   43161 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0923 11:26:51.777961   43161 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0923 11:26:51.777966   43161 command_runner.go:130] > # IP address on which the stream server will listen.
	I0923 11:26:51.777973   43161 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0923 11:26:51.777979   43161 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0923 11:26:51.777986   43161 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0923 11:26:51.777990   43161 command_runner.go:130] > # stream_port = "0"
	I0923 11:26:51.777997   43161 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0923 11:26:51.778001   43161 command_runner.go:130] > # stream_enable_tls = false
	I0923 11:26:51.778008   43161 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0923 11:26:51.778013   43161 command_runner.go:130] > # stream_idle_timeout = ""
	I0923 11:26:51.778023   43161 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0923 11:26:51.778031   43161 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0923 11:26:51.778036   43161 command_runner.go:130] > # minutes.
	I0923 11:26:51.778041   43161 command_runner.go:130] > # stream_tls_cert = ""
	I0923 11:26:51.778048   43161 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0923 11:26:51.778055   43161 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0923 11:26:51.778059   43161 command_runner.go:130] > # stream_tls_key = ""
	I0923 11:26:51.778067   43161 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0923 11:26:51.778075   43161 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0923 11:26:51.778087   43161 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0923 11:26:51.778094   43161 command_runner.go:130] > # stream_tls_ca = ""
	I0923 11:26:51.778101   43161 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 11:26:51.778109   43161 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0923 11:26:51.778119   43161 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 11:26:51.778125   43161 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0923 11:26:51.778131   43161 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0923 11:26:51.778138   43161 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0923 11:26:51.778142   43161 command_runner.go:130] > [crio.runtime]
	I0923 11:26:51.778147   43161 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0923 11:26:51.778154   43161 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0923 11:26:51.778158   43161 command_runner.go:130] > # "nofile=1024:2048"
	I0923 11:26:51.778166   43161 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0923 11:26:51.778172   43161 command_runner.go:130] > # default_ulimits = [
	I0923 11:26:51.778175   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778182   43161 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0923 11:26:51.778186   43161 command_runner.go:130] > # no_pivot = false
	I0923 11:26:51.778193   43161 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0923 11:26:51.778199   43161 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0923 11:26:51.778208   43161 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0923 11:26:51.778215   43161 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0923 11:26:51.778222   43161 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0923 11:26:51.778229   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 11:26:51.778236   43161 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0923 11:26:51.778240   43161 command_runner.go:130] > # Cgroup setting for conmon
	I0923 11:26:51.778248   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0923 11:26:51.778254   43161 command_runner.go:130] > conmon_cgroup = "pod"
	I0923 11:26:51.778260   43161 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0923 11:26:51.778266   43161 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0923 11:26:51.778272   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 11:26:51.778279   43161 command_runner.go:130] > conmon_env = [
	I0923 11:26:51.778284   43161 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 11:26:51.778290   43161 command_runner.go:130] > ]
	I0923 11:26:51.778295   43161 command_runner.go:130] > # Additional environment variables to set for all the
	I0923 11:26:51.778302   43161 command_runner.go:130] > # containers. These are overridden if set in the
	I0923 11:26:51.778307   43161 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0923 11:26:51.778314   43161 command_runner.go:130] > # default_env = [
	I0923 11:26:51.778317   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778325   43161 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0923 11:26:51.778332   43161 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0923 11:26:51.778338   43161 command_runner.go:130] > # selinux = false
	I0923 11:26:51.778345   43161 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0923 11:26:51.778352   43161 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0923 11:26:51.778360   43161 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0923 11:26:51.778363   43161 command_runner.go:130] > # seccomp_profile = ""
	I0923 11:26:51.778371   43161 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0923 11:26:51.778376   43161 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0923 11:26:51.778384   43161 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0923 11:26:51.778388   43161 command_runner.go:130] > # which might increase security.
	I0923 11:26:51.778394   43161 command_runner.go:130] > # This option is currently deprecated,
	I0923 11:26:51.778400   43161 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0923 11:26:51.778406   43161 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0923 11:26:51.778412   43161 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0923 11:26:51.778419   43161 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0923 11:26:51.778426   43161 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0923 11:26:51.778433   43161 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0923 11:26:51.778439   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.778444   43161 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0923 11:26:51.778455   43161 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0923 11:26:51.778462   43161 command_runner.go:130] > # the cgroup blockio controller.
	I0923 11:26:51.778466   43161 command_runner.go:130] > # blockio_config_file = ""
	I0923 11:26:51.778475   43161 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0923 11:26:51.778479   43161 command_runner.go:130] > # blockio parameters.
	I0923 11:26:51.778483   43161 command_runner.go:130] > # blockio_reload = false
	I0923 11:26:51.778490   43161 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0923 11:26:51.778496   43161 command_runner.go:130] > # irqbalance daemon.
	I0923 11:26:51.778501   43161 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0923 11:26:51.778509   43161 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0923 11:26:51.778517   43161 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0923 11:26:51.778526   43161 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0923 11:26:51.778533   43161 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0923 11:26:51.778542   43161 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0923 11:26:51.778549   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.778552   43161 command_runner.go:130] > # rdt_config_file = ""
	I0923 11:26:51.778559   43161 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0923 11:26:51.778563   43161 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0923 11:26:51.778581   43161 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0923 11:26:51.778587   43161 command_runner.go:130] > # separate_pull_cgroup = ""
	I0923 11:26:51.778593   43161 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0923 11:26:51.778601   43161 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0923 11:26:51.778605   43161 command_runner.go:130] > # will be added.
	I0923 11:26:51.778609   43161 command_runner.go:130] > # default_capabilities = [
	I0923 11:26:51.778615   43161 command_runner.go:130] > # 	"CHOWN",
	I0923 11:26:51.778619   43161 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0923 11:26:51.778624   43161 command_runner.go:130] > # 	"FSETID",
	I0923 11:26:51.778628   43161 command_runner.go:130] > # 	"FOWNER",
	I0923 11:26:51.778634   43161 command_runner.go:130] > # 	"SETGID",
	I0923 11:26:51.778638   43161 command_runner.go:130] > # 	"SETUID",
	I0923 11:26:51.778644   43161 command_runner.go:130] > # 	"SETPCAP",
	I0923 11:26:51.778648   43161 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0923 11:26:51.778654   43161 command_runner.go:130] > # 	"KILL",
	I0923 11:26:51.778657   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778666   43161 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0923 11:26:51.778675   43161 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0923 11:26:51.778679   43161 command_runner.go:130] > # add_inheritable_capabilities = false
	I0923 11:26:51.778687   43161 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0923 11:26:51.778693   43161 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 11:26:51.778699   43161 command_runner.go:130] > default_sysctls = [
	I0923 11:26:51.778704   43161 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0923 11:26:51.778709   43161 command_runner.go:130] > ]
	I0923 11:26:51.778713   43161 command_runner.go:130] > # List of devices on the host that a
	I0923 11:26:51.778721   43161 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0923 11:26:51.778730   43161 command_runner.go:130] > # allowed_devices = [
	I0923 11:26:51.778736   43161 command_runner.go:130] > # 	"/dev/fuse",
	I0923 11:26:51.778744   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778751   43161 command_runner.go:130] > # List of additional devices. specified as
	I0923 11:26:51.778764   43161 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0923 11:26:51.778775   43161 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0923 11:26:51.778784   43161 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 11:26:51.778792   43161 command_runner.go:130] > # additional_devices = [
	I0923 11:26:51.778798   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778807   43161 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0923 11:26:51.778815   43161 command_runner.go:130] > # cdi_spec_dirs = [
	I0923 11:26:51.778821   43161 command_runner.go:130] > # 	"/etc/cdi",
	I0923 11:26:51.778825   43161 command_runner.go:130] > # 	"/var/run/cdi",
	I0923 11:26:51.778830   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778837   43161 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0923 11:26:51.778845   43161 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0923 11:26:51.778851   43161 command_runner.go:130] > # Defaults to false.
	I0923 11:26:51.778857   43161 command_runner.go:130] > # device_ownership_from_security_context = false
	I0923 11:26:51.778865   43161 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0923 11:26:51.778873   43161 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0923 11:26:51.778878   43161 command_runner.go:130] > # hooks_dir = [
	I0923 11:26:51.778883   43161 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0923 11:26:51.778888   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778893   43161 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0923 11:26:51.778901   43161 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0923 11:26:51.778908   43161 command_runner.go:130] > # its default mounts from the following two files:
	I0923 11:26:51.778911   43161 command_runner.go:130] > #
	I0923 11:26:51.778917   43161 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0923 11:26:51.778925   43161 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0923 11:26:51.778933   43161 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0923 11:26:51.778936   43161 command_runner.go:130] > #
	I0923 11:26:51.778941   43161 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0923 11:26:51.778949   43161 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0923 11:26:51.778965   43161 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0923 11:26:51.778972   43161 command_runner.go:130] > #      only add mounts it finds in this file.
	I0923 11:26:51.778978   43161 command_runner.go:130] > #
	I0923 11:26:51.778982   43161 command_runner.go:130] > # default_mounts_file = ""
	I0923 11:26:51.778989   43161 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0923 11:26:51.778995   43161 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0923 11:26:51.779001   43161 command_runner.go:130] > pids_limit = 1024
	I0923 11:26:51.779007   43161 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0923 11:26:51.779015   43161 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0923 11:26:51.779021   43161 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0923 11:26:51.779031   43161 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0923 11:26:51.779037   43161 command_runner.go:130] > # log_size_max = -1
	I0923 11:26:51.779043   43161 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0923 11:26:51.779049   43161 command_runner.go:130] > # log_to_journald = false
	I0923 11:26:51.779055   43161 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0923 11:26:51.779062   43161 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0923 11:26:51.779067   43161 command_runner.go:130] > # Path to directory for container attach sockets.
	I0923 11:26:51.779074   43161 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0923 11:26:51.779079   43161 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0923 11:26:51.779085   43161 command_runner.go:130] > # bind_mount_prefix = ""
	I0923 11:26:51.779090   43161 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0923 11:26:51.779096   43161 command_runner.go:130] > # read_only = false
	I0923 11:26:51.779102   43161 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0923 11:26:51.779110   43161 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0923 11:26:51.779116   43161 command_runner.go:130] > # live configuration reload.
	I0923 11:26:51.779120   43161 command_runner.go:130] > # log_level = "info"
	I0923 11:26:51.779127   43161 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0923 11:26:51.779132   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.779138   43161 command_runner.go:130] > # log_filter = ""
	I0923 11:26:51.779143   43161 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0923 11:26:51.779152   43161 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0923 11:26:51.779158   43161 command_runner.go:130] > # separated by comma.
	I0923 11:26:51.779165   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779171   43161 command_runner.go:130] > # uid_mappings = ""
	I0923 11:26:51.779177   43161 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0923 11:26:51.779185   43161 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0923 11:26:51.779189   43161 command_runner.go:130] > # separated by comma.
	I0923 11:26:51.779199   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779206   43161 command_runner.go:130] > # gid_mappings = ""
	I0923 11:26:51.779215   43161 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0923 11:26:51.779223   43161 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 11:26:51.779230   43161 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 11:26:51.779239   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779245   43161 command_runner.go:130] > # minimum_mappable_uid = -1
	I0923 11:26:51.779251   43161 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0923 11:26:51.779259   43161 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 11:26:51.779267   43161 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 11:26:51.779276   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779283   43161 command_runner.go:130] > # minimum_mappable_gid = -1
	I0923 11:26:51.779291   43161 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0923 11:26:51.779298   43161 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0923 11:26:51.779305   43161 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0923 11:26:51.779310   43161 command_runner.go:130] > # ctr_stop_timeout = 30
	I0923 11:26:51.779316   43161 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0923 11:26:51.779324   43161 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0923 11:26:51.779329   43161 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0923 11:26:51.779335   43161 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0923 11:26:51.779339   43161 command_runner.go:130] > drop_infra_ctr = false
	I0923 11:26:51.779347   43161 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0923 11:26:51.779352   43161 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0923 11:26:51.779361   43161 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0923 11:26:51.779368   43161 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0923 11:26:51.779374   43161 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0923 11:26:51.779382   43161 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0923 11:26:51.779390   43161 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0923 11:26:51.779395   43161 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0923 11:26:51.779401   43161 command_runner.go:130] > # shared_cpuset = ""
	I0923 11:26:51.779407   43161 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0923 11:26:51.779414   43161 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0923 11:26:51.779418   43161 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0923 11:26:51.779425   43161 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0923 11:26:51.779430   43161 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0923 11:26:51.779435   43161 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0923 11:26:51.779443   43161 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0923 11:26:51.779448   43161 command_runner.go:130] > # enable_criu_support = false
	I0923 11:26:51.779457   43161 command_runner.go:130] > # Enable/disable the generation of the container,
	I0923 11:26:51.779464   43161 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0923 11:26:51.779471   43161 command_runner.go:130] > # enable_pod_events = false
	I0923 11:26:51.779477   43161 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0923 11:26:51.779492   43161 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0923 11:26:51.779496   43161 command_runner.go:130] > # default_runtime = "runc"
	I0923 11:26:51.779503   43161 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0923 11:26:51.779510   43161 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I0923 11:26:51.779520   43161 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0923 11:26:51.779527   43161 command_runner.go:130] > # creation as a file is not desired either.
	I0923 11:26:51.779536   43161 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0923 11:26:51.779551   43161 command_runner.go:130] > # the hostname is being managed dynamically.
	I0923 11:26:51.779557   43161 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0923 11:26:51.779560   43161 command_runner.go:130] > # ]
	I0923 11:26:51.779567   43161 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0923 11:26:51.779575   43161 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0923 11:26:51.779583   43161 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0923 11:26:51.779591   43161 command_runner.go:130] > # Each entry in the table should follow the format:
	I0923 11:26:51.779594   43161 command_runner.go:130] > #
	I0923 11:26:51.779598   43161 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0923 11:26:51.779605   43161 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0923 11:26:51.779623   43161 command_runner.go:130] > # runtime_type = "oci"
	I0923 11:26:51.779629   43161 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0923 11:26:51.779635   43161 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0923 11:26:51.779641   43161 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0923 11:26:51.779646   43161 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0923 11:26:51.779652   43161 command_runner.go:130] > # monitor_env = []
	I0923 11:26:51.779657   43161 command_runner.go:130] > # privileged_without_host_devices = false
	I0923 11:26:51.779663   43161 command_runner.go:130] > # allowed_annotations = []
	I0923 11:26:51.779668   43161 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0923 11:26:51.779675   43161 command_runner.go:130] > # Where:
	I0923 11:26:51.779680   43161 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0923 11:26:51.779688   43161 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0923 11:26:51.779694   43161 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0923 11:26:51.779702   43161 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0923 11:26:51.779707   43161 command_runner.go:130] > #   in $PATH.
	I0923 11:26:51.779713   43161 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0923 11:26:51.779719   43161 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0923 11:26:51.779726   43161 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0923 11:26:51.779735   43161 command_runner.go:130] > #   state.
	I0923 11:26:51.779745   43161 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0923 11:26:51.779757   43161 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0923 11:26:51.779770   43161 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0923 11:26:51.779779   43161 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0923 11:26:51.779790   43161 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0923 11:26:51.779803   43161 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0923 11:26:51.779813   43161 command_runner.go:130] > #   The currently recognized values are:
	I0923 11:26:51.779823   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0923 11:26:51.779835   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0923 11:26:51.779847   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0923 11:26:51.779859   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0923 11:26:51.779872   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0923 11:26:51.779881   43161 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0923 11:26:51.779890   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0923 11:26:51.779898   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0923 11:26:51.779904   43161 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0923 11:26:51.779913   43161 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0923 11:26:51.779920   43161 command_runner.go:130] > #   deprecated option "conmon".
	I0923 11:26:51.779926   43161 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0923 11:26:51.779933   43161 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0923 11:26:51.779940   43161 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0923 11:26:51.779947   43161 command_runner.go:130] > #   should be moved to the container's cgroup
	I0923 11:26:51.779953   43161 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0923 11:26:51.779960   43161 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0923 11:26:51.779966   43161 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0923 11:26:51.779973   43161 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0923 11:26:51.779977   43161 command_runner.go:130] > #
	I0923 11:26:51.779983   43161 command_runner.go:130] > # Using the seccomp notifier feature:
	I0923 11:26:51.779987   43161 command_runner.go:130] > #
	I0923 11:26:51.779994   43161 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0923 11:26:51.780002   43161 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0923 11:26:51.780009   43161 command_runner.go:130] > #
	I0923 11:26:51.780015   43161 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0923 11:26:51.780023   43161 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0923 11:26:51.780026   43161 command_runner.go:130] > #
	I0923 11:26:51.780034   43161 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0923 11:26:51.780040   43161 command_runner.go:130] > # feature.
	I0923 11:26:51.780043   43161 command_runner.go:130] > #
	I0923 11:26:51.780049   43161 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0923 11:26:51.780057   43161 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0923 11:26:51.780063   43161 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0923 11:26:51.780071   43161 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0923 11:26:51.780078   43161 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I0923 11:26:51.780082   43161 command_runner.go:130] > #
	I0923 11:26:51.780090   43161 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0923 11:26:51.780097   43161 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0923 11:26:51.780102   43161 command_runner.go:130] > #
	I0923 11:26:51.780108   43161 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0923 11:26:51.780113   43161 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0923 11:26:51.780119   43161 command_runner.go:130] > #
	I0923 11:26:51.780125   43161 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0923 11:26:51.780133   43161 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0923 11:26:51.780138   43161 command_runner.go:130] > # limitation.
	I0923 11:26:51.780143   43161 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0923 11:26:51.780149   43161 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0923 11:26:51.780156   43161 command_runner.go:130] > runtime_type = "oci"
	I0923 11:26:51.780162   43161 command_runner.go:130] > runtime_root = "/run/runc"
	I0923 11:26:51.780167   43161 command_runner.go:130] > runtime_config_path = ""
	I0923 11:26:51.780173   43161 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0923 11:26:51.780177   43161 command_runner.go:130] > monitor_cgroup = "pod"
	I0923 11:26:51.780181   43161 command_runner.go:130] > monitor_exec_cgroup = ""
	I0923 11:26:51.780187   43161 command_runner.go:130] > monitor_env = [
	I0923 11:26:51.780193   43161 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 11:26:51.780198   43161 command_runner.go:130] > ]
	I0923 11:26:51.780204   43161 command_runner.go:130] > privileged_without_host_devices = false
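
The [crio.runtime.runtimes.runc] entry above is a concrete instance of the runtime-handler table format described in the preceding comments. For illustration only, an additional handler could be registered with a drop-in file; this is a minimal sketch, not something this run does (it assumes crun is installed at /usr/bin/crun and that CRI-O loads /etc/crio/crio.conf.d/*.conf, as stock CRI-O does):

# Sketch: register an extra "crun" runtime handler via a CRI-O drop-in file.
sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
EOF
sudo systemctl restart crio

A pod then selects the handler through a Kubernetes RuntimeClass whose handler field matches the table name ("crun").
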
	I0923 11:26:51.780212   43161 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0923 11:26:51.780219   43161 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0923 11:26:51.780225   43161 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0923 11:26:51.780234   43161 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0923 11:26:51.780243   43161 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0923 11:26:51.780251   43161 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0923 11:26:51.780260   43161 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0923 11:26:51.780269   43161 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0923 11:26:51.780277   43161 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0923 11:26:51.780284   43161 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0923 11:26:51.780290   43161 command_runner.go:130] > # Example:
	I0923 11:26:51.780294   43161 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0923 11:26:51.780301   43161 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0923 11:26:51.780306   43161 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0923 11:26:51.780313   43161 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0923 11:26:51.780316   43161 command_runner.go:130] > # cpuset = "0-1"
	I0923 11:26:51.780320   43161 command_runner.go:130] > # cpushares = 0
	I0923 11:26:51.780325   43161 command_runner.go:130] > # Where:
	I0923 11:26:51.780330   43161 command_runner.go:130] > # The workload name is workload-type.
	I0923 11:26:51.780338   43161 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0923 11:26:51.780345   43161 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0923 11:26:51.780350   43161 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0923 11:26:51.780360   43161 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0923 11:26:51.780367   43161 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0923 11:26:51.780372   43161 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0923 11:26:51.780381   43161 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0923 11:26:51.780387   43161 command_runner.go:130] > # Default value is set to true
	I0923 11:26:51.780391   43161 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0923 11:26:51.780397   43161 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0923 11:26:51.780404   43161 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0923 11:26:51.780408   43161 command_runner.go:130] > # Default value is set to 'false'
	I0923 11:26:51.780414   43161 command_runner.go:130] > # disable_hostport_mapping = false
	I0923 11:26:51.780421   43161 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0923 11:26:51.780425   43161 command_runner.go:130] > #
	I0923 11:26:51.780430   43161 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0923 11:26:51.780436   43161 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0923 11:26:51.780441   43161 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0923 11:26:51.780447   43161 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0923 11:26:51.780457   43161 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0923 11:26:51.780463   43161 command_runner.go:130] > [crio.image]
	I0923 11:26:51.780472   43161 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0923 11:26:51.780479   43161 command_runner.go:130] > # default_transport = "docker://"
	I0923 11:26:51.780488   43161 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0923 11:26:51.780498   43161 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0923 11:26:51.780504   43161 command_runner.go:130] > # global_auth_file = ""
	I0923 11:26:51.780511   43161 command_runner.go:130] > # The image used to instantiate infra containers.
	I0923 11:26:51.780518   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.780525   43161 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0923 11:26:51.780535   43161 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0923 11:26:51.780544   43161 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0923 11:26:51.780553   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.780559   43161 command_runner.go:130] > # pause_image_auth_file = ""
	I0923 11:26:51.780565   43161 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0923 11:26:51.780570   43161 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0923 11:26:51.780576   43161 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0923 11:26:51.780581   43161 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0923 11:26:51.780586   43161 command_runner.go:130] > # pause_command = "/pause"
	I0923 11:26:51.780595   43161 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0923 11:26:51.780604   43161 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0923 11:26:51.780616   43161 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0923 11:26:51.780631   43161 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0923 11:26:51.780643   43161 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0923 11:26:51.780655   43161 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0923 11:26:51.780664   43161 command_runner.go:130] > # pinned_images = [
	I0923 11:26:51.780672   43161 command_runner.go:130] > # ]
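
Since pause_image above is registry.k8s.io/pause:3.10, a hedged example of pinning it so the kubelet's garbage collection never removes it would be a drop-in like the following (the file path is an assumption, not part of this run; glob and keyword patterns are also accepted per the comments above):

sudo tee /etc/crio/crio.conf.d/20-pinned-images.conf >/dev/null <<'EOF'
[crio.image]
pinned_images = ["registry.k8s.io/pause:3.10"]
EOF
sudo systemctl restart crio
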
	I0923 11:26:51.780682   43161 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0923 11:26:51.780695   43161 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0923 11:26:51.780704   43161 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0923 11:26:51.780714   43161 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0923 11:26:51.780725   43161 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0923 11:26:51.780734   43161 command_runner.go:130] > # signature_policy = ""
	I0923 11:26:51.780743   43161 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0923 11:26:51.780755   43161 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0923 11:26:51.780766   43161 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0923 11:26:51.780776   43161 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0923 11:26:51.780786   43161 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0923 11:26:51.780794   43161 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0923 11:26:51.780804   43161 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0923 11:26:51.780814   43161 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0923 11:26:51.780824   43161 command_runner.go:130] > # changing them here.
	I0923 11:26:51.780830   43161 command_runner.go:130] > # insecure_registries = [
	I0923 11:26:51.780838   43161 command_runner.go:130] > # ]
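
As the comments recommend, registry-level settings normally belong in containers-registries.conf(5) rather than in crio.conf. A sketch of marking a private registry as insecure that way (the registry address and file name are hypothetical):

sudo tee /etc/containers/registries.conf.d/50-insecure-test.conf >/dev/null <<'EOF'
[[registry]]
location = "registry.example.internal:5000"
insecure = true
EOF
sudo systemctl restart crio
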
	I0923 11:26:51.780849   43161 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0923 11:26:51.780861   43161 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0923 11:26:51.780870   43161 command_runner.go:130] > # image_volumes = "mkdir"
	I0923 11:26:51.780879   43161 command_runner.go:130] > # Temporary directory to use for storing big files
	I0923 11:26:51.780889   43161 command_runner.go:130] > # big_files_temporary_dir = ""
	I0923 11:26:51.780901   43161 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0923 11:26:51.780909   43161 command_runner.go:130] > # CNI plugins.
	I0923 11:26:51.780917   43161 command_runner.go:130] > [crio.network]
	I0923 11:26:51.780926   43161 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0923 11:26:51.780937   43161 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0923 11:26:51.780947   43161 command_runner.go:130] > # cni_default_network = ""
	I0923 11:26:51.780954   43161 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0923 11:26:51.780964   43161 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0923 11:26:51.780971   43161 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0923 11:26:51.780980   43161 command_runner.go:130] > # plugin_dirs = [
	I0923 11:26:51.780986   43161 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0923 11:26:51.780993   43161 command_runner.go:130] > # ]
	I0923 11:26:51.781001   43161 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0923 11:26:51.781009   43161 command_runner.go:130] > [crio.metrics]
	I0923 11:26:51.781017   43161 command_runner.go:130] > # Globally enable or disable metrics support.
	I0923 11:26:51.781025   43161 command_runner.go:130] > enable_metrics = true
	I0923 11:26:51.781032   43161 command_runner.go:130] > # Specify enabled metrics collectors.
	I0923 11:26:51.781042   43161 command_runner.go:130] > # Per default all metrics are enabled.
	I0923 11:26:51.781051   43161 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0923 11:26:51.781064   43161 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0923 11:26:51.781075   43161 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0923 11:26:51.781084   43161 command_runner.go:130] > # metrics_collectors = [
	I0923 11:26:51.781090   43161 command_runner.go:130] > # 	"operations",
	I0923 11:26:51.781100   43161 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0923 11:26:51.781119   43161 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0923 11:26:51.781128   43161 command_runner.go:130] > # 	"operations_errors",
	I0923 11:26:51.781139   43161 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0923 11:26:51.781147   43161 command_runner.go:130] > # 	"image_pulls_by_name",
	I0923 11:26:51.781154   43161 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0923 11:26:51.781160   43161 command_runner.go:130] > # 	"image_pulls_failures",
	I0923 11:26:51.781166   43161 command_runner.go:130] > # 	"image_pulls_successes",
	I0923 11:26:51.781171   43161 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0923 11:26:51.781177   43161 command_runner.go:130] > # 	"image_layer_reuse",
	I0923 11:26:51.781181   43161 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0923 11:26:51.781187   43161 command_runner.go:130] > # 	"containers_oom_total",
	I0923 11:26:51.781192   43161 command_runner.go:130] > # 	"containers_oom",
	I0923 11:26:51.781198   43161 command_runner.go:130] > # 	"processes_defunct",
	I0923 11:26:51.781202   43161 command_runner.go:130] > # 	"operations_total",
	I0923 11:26:51.781209   43161 command_runner.go:130] > # 	"operations_latency_seconds",
	I0923 11:26:51.781214   43161 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0923 11:26:51.781220   43161 command_runner.go:130] > # 	"operations_errors_total",
	I0923 11:26:51.781224   43161 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0923 11:26:51.781231   43161 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0923 11:26:51.781235   43161 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0923 11:26:51.781241   43161 command_runner.go:130] > # 	"image_pulls_success_total",
	I0923 11:26:51.781246   43161 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0923 11:26:51.781251   43161 command_runner.go:130] > # 	"containers_oom_count_total",
	I0923 11:26:51.781256   43161 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0923 11:26:51.781263   43161 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0923 11:26:51.781266   43161 command_runner.go:130] > # ]
	I0923 11:26:51.781273   43161 command_runner.go:130] > # The port on which the metrics server will listen.
	I0923 11:26:51.781277   43161 command_runner.go:130] > # metrics_port = 9090
	I0923 11:26:51.781284   43161 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0923 11:26:51.781288   43161 command_runner.go:130] > # metrics_socket = ""
	I0923 11:26:51.781295   43161 command_runner.go:130] > # The certificate for the secure metrics server.
	I0923 11:26:51.781301   43161 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0923 11:26:51.781309   43161 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0923 11:26:51.781314   43161 command_runner.go:130] > # certificate on any modification event.
	I0923 11:26:51.781320   43161 command_runner.go:130] > # metrics_cert = ""
	I0923 11:26:51.781324   43161 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0923 11:26:51.781332   43161 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0923 11:26:51.781336   43161 command_runner.go:130] > # metrics_key = ""
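
With enable_metrics = true (as set above) and the default metrics_port of 9090, the exporter can be spot-checked from the node; a sketch, assuming plain HTTP since no metrics_cert/metrics_key are configured here:

curl -s http://127.0.0.1:9090/metrics | grep -i operations | head
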
	I0923 11:26:51.781343   43161 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0923 11:26:51.781349   43161 command_runner.go:130] > [crio.tracing]
	I0923 11:26:51.781355   43161 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0923 11:26:51.781361   43161 command_runner.go:130] > # enable_tracing = false
	I0923 11:26:51.781366   43161 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0923 11:26:51.781373   43161 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0923 11:26:51.781397   43161 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0923 11:26:51.781407   43161 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0923 11:26:51.781411   43161 command_runner.go:130] > # CRI-O NRI configuration.
	I0923 11:26:51.781416   43161 command_runner.go:130] > [crio.nri]
	I0923 11:26:51.781420   43161 command_runner.go:130] > # Globally enable or disable NRI.
	I0923 11:26:51.781426   43161 command_runner.go:130] > # enable_nri = false
	I0923 11:26:51.781430   43161 command_runner.go:130] > # NRI socket to listen on.
	I0923 11:26:51.781437   43161 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0923 11:26:51.781441   43161 command_runner.go:130] > # NRI plugin directory to use.
	I0923 11:26:51.781448   43161 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0923 11:26:51.781456   43161 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0923 11:26:51.781463   43161 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0923 11:26:51.781468   43161 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0923 11:26:51.781474   43161 command_runner.go:130] > # nri_disable_connections = false
	I0923 11:26:51.781480   43161 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0923 11:26:51.781486   43161 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0923 11:26:51.781491   43161 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0923 11:26:51.781498   43161 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0923 11:26:51.781504   43161 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0923 11:26:51.781510   43161 command_runner.go:130] > [crio.stats]
	I0923 11:26:51.781515   43161 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0923 11:26:51.781522   43161 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0923 11:26:51.781530   43161 command_runner.go:130] > # stats_collection_period = 0
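
The dump above is CRI-O's commentary on the configuration it loaded. To review what a running minikube node actually resolved, something like the following should work (a sketch; `crio config` prints the effective configuration, and multinode-399279 is the profile used in this run):

minikube -p multinode-399279 ssh -- sudo crio config | grep -E 'pause_image|default_runtime|pinns_path|drop_infra_ctr'
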
	I0923 11:26:51.781601   43161 cni.go:84] Creating CNI manager for ""
	I0923 11:26:51.781614   43161 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 11:26:51.781622   43161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:26:51.781641   43161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-399279 NodeName:multinode-399279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:26:51.781797   43161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-399279"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
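
minikube copies this rendered config to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp step just below). Outside of minikube, a config like this can be sanity-checked before use; a sketch, assuming a kubeadm recent enough to ship `kubeadm config validate` (present in the v1.31 series):

sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
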
	
	I0923 11:26:51.781863   43161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:26:51.792175   43161 command_runner.go:130] > kubeadm
	I0923 11:26:51.792194   43161 command_runner.go:130] > kubectl
	I0923 11:26:51.792198   43161 command_runner.go:130] > kubelet
	I0923 11:26:51.792218   43161 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:26:51.792271   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:26:51.801665   43161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0923 11:26:51.818351   43161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:26:51.834930   43161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 11:26:51.851417   43161 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0923 11:26:51.855252   43161 command_runner.go:130] > 192.168.39.71	control-plane.minikube.internal
	I0923 11:26:51.855332   43161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:26:51.994265   43161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:26:52.009810   43161 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279 for IP: 192.168.39.71
	I0923 11:26:52.009842   43161 certs.go:194] generating shared ca certs ...
	I0923 11:26:52.009864   43161 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:26:52.010040   43161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:26:52.010078   43161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:26:52.010088   43161 certs.go:256] generating profile certs ...
	I0923 11:26:52.010162   43161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/client.key
	I0923 11:26:52.010219   43161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key.43f0afc4
	I0923 11:26:52.010256   43161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key
	I0923 11:26:52.010267   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:26:52.010282   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:26:52.010296   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:26:52.010308   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:26:52.010320   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:26:52.010332   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:26:52.010345   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:26:52.010357   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:26:52.010409   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:26:52.010437   43161 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:26:52.010446   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:26:52.010468   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:26:52.010489   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:26:52.010510   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:26:52.010547   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:26:52.010596   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.010611   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.010623   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.011170   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:26:52.036640   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:26:52.063594   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:26:52.090000   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:26:52.114366   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 11:26:52.138524   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:26:52.163102   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:26:52.188818   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:26:52.212806   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:26:52.236411   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:26:52.260015   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:26:52.284616   43161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:26:52.301590   43161 ssh_runner.go:195] Run: openssl version
	I0923 11:26:52.307571   43161 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 11:26:52.307637   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:26:52.318493   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.322921   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.323111   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.323152   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.328668   43161 command_runner.go:130] > 51391683
	I0923 11:26:52.328866   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 11:26:52.338526   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:26:52.349343   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353729   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353784   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353833   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.359428   43161 command_runner.go:130] > 3ec20f2e
	I0923 11:26:52.359496   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:26:52.369031   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:26:52.380531   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.384831   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.385154   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.385196   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.390838   43161 command_runner.go:130] > b5213941
	I0923 11:26:52.391111   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
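
The three blocks above all follow the same pattern: place a PEM under /usr/share/ca-certificates, then make it visible to OpenSSL-based clients, which look CAs up by subject hash. Reduced to its essentials, using the minikubeCA.pem path from this run:

# OpenSSL finds CAs via <subject-hash>.0 symlinks in /etc/ssl/certs.
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
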
	I0923 11:26:52.400596   43161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:26:52.405193   43161 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:26:52.405214   43161 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 11:26:52.405222   43161 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0923 11:26:52.405231   43161 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:26:52.405240   43161 command_runner.go:130] > Access: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405247   43161 command_runner.go:130] > Modify: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405255   43161 command_runner.go:130] > Change: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405267   43161 command_runner.go:130] >  Birth: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405316   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:26:52.410731   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.410972   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:26:52.416442   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.416500   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:26:52.422025   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.422086   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:26:52.427562   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.427615   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:26:52.432853   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.433086   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 11:26:52.438374   43161 command_runner.go:130] > Certificate will not expire
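
Each `-checkend 86400` call above asks whether the certificate is still valid 24 hours from now; openssl prints "Certificate will not expire" and exits 0 when it is, and exits non-zero otherwise, so the same probe can gate a script:

openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
  || echo "apiserver-kubelet-client.crt expires within 24h"
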
	I0923 11:26:52.438514   43161 kubeadm.go:392] StartCluster: {Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:26:52.438609   43161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:26:52.438665   43161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:26:52.473735   43161 command_runner.go:130] > 46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05
	I0923 11:26:52.473764   43161 command_runner.go:130] > ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d
	I0923 11:26:52.473773   43161 command_runner.go:130] > 87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598
	I0923 11:26:52.473789   43161 command_runner.go:130] > e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53
	I0923 11:26:52.473799   43161 command_runner.go:130] > d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698
	I0923 11:26:52.473825   43161 command_runner.go:130] > 03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880
	I0923 11:26:52.473847   43161 command_runner.go:130] > a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51
	I0923 11:26:52.473978   43161 command_runner.go:130] > 1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0
	I0923 11:26:52.475327   43161 cri.go:89] found id: "46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05"
	I0923 11:26:52.475343   43161 cri.go:89] found id: "ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d"
	I0923 11:26:52.475348   43161 cri.go:89] found id: "87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598"
	I0923 11:26:52.475356   43161 cri.go:89] found id: "e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53"
	I0923 11:26:52.475360   43161 cri.go:89] found id: "d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698"
	I0923 11:26:52.475367   43161 cri.go:89] found id: "03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880"
	I0923 11:26:52.475371   43161 cri.go:89] found id: "a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51"
	I0923 11:26:52.475378   43161 cri.go:89] found id: "1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0"
	I0923 11:26:52.475386   43161 cri.go:89] found id: ""
	I0923 11:26:52.475438   43161 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.592040764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090921592010671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff1ff64b-6d00-4d80-858f-4931a6e267a3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.592677866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aff8546a-ef5b-4c90-b04f-31d65180d313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.592748592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aff8546a-ef5b-4c90-b04f-31d65180d313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.593192311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aff8546a-ef5b-4c90-b04f-31d65180d313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.634361818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a47b4de-5756-488f-a4df-31a0ea7c043f name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.634459635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a47b4de-5756-488f-a4df-31a0ea7c043f name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.635747185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e72aa1d-899d-4d35-8061-185b34b57e6d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.636209082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090921636185292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e72aa1d-899d-4d35-8061-185b34b57e6d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.636658367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c09091aa-8b73-440b-b468-f9462ec4a53c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.636732186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c09091aa-8b73-440b-b468-f9462ec4a53c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.637190248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c09091aa-8b73-440b-b468-f9462ec4a53c name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.685566108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c86bc80-7b3b-4d64-b502-747df95669ff name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.685676595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c86bc80-7b3b-4d64-b502-747df95669ff name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.686863280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=195e82b3-f2a5-4047-a929-e924370af243 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.687463836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090921687432299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=195e82b3-f2a5-4047-a929-e924370af243 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.688318970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bbebfb6-b9ce-4e99-b599-e89d2514d8fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.688392407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bbebfb6-b9ce-4e99-b599-e89d2514d8fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.688835593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bbebfb6-b9ce-4e99-b599-e89d2514d8fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.732271157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9b00f4f-d4c7-48a1-8960-69ecc7238bcb name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.732347080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9b00f4f-d4c7-48a1-8960-69ecc7238bcb name=/runtime.v1.RuntimeService/Version
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.733791269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06238456-e7fc-44d5-b26d-b8faef444b8d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.734400712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090921734376940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06238456-e7fc-44d5-b26d-b8faef444b8d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.735064885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbcb63e0-b483-4cbe-9ea3-a2a049599d3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.735138667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbcb63e0-b483-4cbe-9ea3-a2a049599d3a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:28:41 multinode-399279 crio[2719]: time="2024-09-23 11:28:41.735551833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbcb63e0-b483-4cbe-9ea3-a2a049599d3a name=/runtime.v1.RuntimeService/ListContainers
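The entries above are CRI `ListContainers` responses that CRI-O emits at debug level in its journal. The same container list can be pulled by hand from the guest; a rough sketch (assumes the multinode-399279 VM is still running and reachable via `minikube ssh`, and uses the crio.sock path advertised in the node annotations further down):

	minikube ssh -p multinode-399279 -- sudo crictl ps -a
	# or point crictl at the CRI socket explicitly:
	minikube ssh -p multinode-399279 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a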
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9a929bac2c9af       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9d6f4c17090e2       busybox-7dff88458-7b2xk
	8d372fd2cf2ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   323d824dc0d8c       kindnet-qcbts
	11c4c3aad6d39       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   effdb178fd9f7       coredns-7c65d6cfc9-czp4x
	76a19b4acbb74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   975e8f1a983c4       storage-provisioner
	1508d80d15a66       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   230baf8529f98       kube-proxy-fwq2c
	587c4f94f2349       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   8ea0d7e1e90ac       etcd-multinode-399279
	0920dd93b5fac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   2b065e42ec7b8       kube-controller-manager-multinode-399279
	6ae00d08a26e7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   1f61542c4ba1f       kube-scheduler-multinode-399279
	aac4bf9cbc3d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   536b4b5268362       kube-apiserver-multinode-399279
	8ff8654a48e6b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   5475877e3bc02       busybox-7dff88458-7b2xk
	46e9fb7bc93a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   353752d7e9883       storage-provisioner
	ae8539595eedb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   8c07860c73cd5       coredns-7c65d6cfc9-czp4x
	87e705b8bdacd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   b70d53e90f5e8       kindnet-qcbts
	e0815b2e94fc6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   1a14ce18b6c36       kube-proxy-fwq2c
	d83ab98dc7840       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   f513a49252bbb       etcd-multinode-399279
	03f8f7a5a8d6b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   0a11ca8d6fc13       kube-scheduler-multinode-399279
	a957e4461eccd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   8250e1c93d6db       kube-apiserver-multinode-399279
	1dcdb01009263       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   b548ec2f049be       kube-controller-manager-multinode-399279
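Each ID in the first column can be inspected directly on the node when this summary is not enough, for example to see why the Attempt-0 instances exited. A sketch, assuming `minikube ssh -p multinode-399279` access and using the truncated IDs shown above (crictl accepts ID prefixes):

	sudo crictl inspect 46e9fb7bc93a9   # exited storage-provisioner: exit code, finishedAt, reason
	sudo crictl logs ae8539595eedb      # stdout/stderr of the previous coredns instance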
	
	
	==> coredns [11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48557 - 55257 "HINFO IN 2321312220502510881.4191824833847128527. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080795278s
	
	
	==> coredns [ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d] <==
	[INFO] 10.244.0.3:37249 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192168s
	[INFO] 10.244.0.3:34093 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072201s
	[INFO] 10.244.0.3:46390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003892s
	[INFO] 10.244.0.3:49193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001238965s
	[INFO] 10.244.0.3:58221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045502s
	[INFO] 10.244.0.3:49543 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117993s
	[INFO] 10.244.0.3:34408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053576s
	[INFO] 10.244.1.2:46900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164548s
	[INFO] 10.244.1.2:32935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119032s
	[INFO] 10.244.1.2:39915 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126255s
	[INFO] 10.244.1.2:54010 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122174s
	[INFO] 10.244.0.3:42206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108434s
	[INFO] 10.244.0.3:58877 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100649s
	[INFO] 10.244.0.3:44498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068199s
	[INFO] 10.244.0.3:43306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071024s
	[INFO] 10.244.1.2:38445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217153s
	[INFO] 10.244.1.2:50825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239401s
	[INFO] 10.244.1.2:54085 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000258958s
	[INFO] 10.244.1.2:58058 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000306327s
	[INFO] 10.244.0.3:36145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085742s
	[INFO] 10.244.0.3:49426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077936s
	[INFO] 10.244.0.3:45842 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049338s
	[INFO] 10.244.0.3:40634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003727s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
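The queries above are ordinary in-cluster lookups for `kubernetes.default` and `host.minikube.internal`; the closing SIGTERM/lameduck lines are this CoreDNS instance shutting down before the Attempt-1 replacement listed earlier took over. A comparable lookup could be driven by hand with a throwaway pod (a sketch; assumes the kubeconfig context carries the profile name and reuses the busybox image already present in the cluster):

	kubectl --context multinode-399279 run dns-check --image=gcr.io/k8s-minikube/busybox \
	  --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local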
	
	
	==> describe nodes <==
	Name:               multinode-399279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-399279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=multinode-399279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_20_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:20:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-399279
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:28:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    multinode-399279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf91b75d1c864569a929cf8d7636034b
	  System UUID:                cf91b75d-1c86-4569-a929-cf8d7636034b
	  Boot ID:                    eed2b87b-8697-43e2-9a45-7bd2f53d2e87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7b2xk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7c65d6cfc9-czp4x                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-multinode-399279                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m31s
	  kube-system                 kindnet-qcbts                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-399279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-multinode-399279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-fwq2c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-399279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m24s                  kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m37s (x8 over 8m37s)  kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x8 over 8m37s)  kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x7 over 8m37s)  kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m31s                  kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m31s                  kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s                  kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m31s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m27s                  node-controller  Node multinode-399279 event: Registered Node multinode-399279 in Controller
	  Normal  NodeReady                8m14s                  kubelet          Node multinode-399279 status is now: NodeReady
	  Normal  Starting                 109s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)    kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)    kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)    kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                   node-controller  Node multinode-399279 event: Registered Node multinode-399279 in Controller
	
	
	Name:               multinode-399279-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-399279-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=multinode-399279
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T11_27_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:27:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-399279-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:28:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:27:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:27:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:27:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:27:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-399279-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22d342f73c814c358b43cd34890b5f63
	  System UUID:                22d342f7-3c81-4c35-8b43-cd34890b5f63
	  Boot ID:                    2522cf5f-f7e8-470f-a435-11dbe540dbcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4xxfg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-84zhl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m41s
	  kube-system                 kube-proxy-pdcm9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  Starting                 7m36s                  kube-proxy  
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m41s (x2 over 7m42s)  kubelet     Node multinode-399279-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x2 over 7m42s)  kubelet     Node multinode-399279-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m41s (x2 over 7m42s)  kubelet     Node multinode-399279-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                7m21s                  kubelet     Node multinode-399279-m02 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-399279-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-399279-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-399279-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                43s                    kubelet     Node multinode-399279-m02 status is now: NodeReady
	
	
	Name:               multinode-399279-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-399279-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=multinode-399279
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T11_28_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:28:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-399279-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:28:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:28:38 +0000   Mon, 23 Sep 2024 11:28:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:28:38 +0000   Mon, 23 Sep 2024 11:28:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:28:38 +0000   Mon, 23 Sep 2024 11:28:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:28:38 +0000   Mon, 23 Sep 2024 11:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    multinode-399279-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ef303eb75ee4fd09b57386912cb9822
	  System UUID:                3ef303eb-75ee-4fd0-9b57-386912cb9822
	  Boot ID:                    12a539ac-583c-49a5-bc87-4d907fe79281
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-f6k8p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-fxxlf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m34s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet     Node multinode-399279-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m20s                  kubelet     Node multinode-399279-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-399279-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m49s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-399279-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  24s (x2 over 24s)      kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x2 over 24s)      kubelet     Node multinode-399279-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x2 over 24s)      kubelet     Node multinode-399279-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-399279-m03 status is now: NodeReady
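The per-node detail above is the standard `kubectl describe` view of the three node objects and can be regenerated while the cluster is up (a sketch, again assuming the context name matches the profile):

	kubectl --context multinode-399279 get nodes -o wide
	kubectl --context multinode-399279 describe node multinode-399279-m03   # or -m02 / the control plane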
	
	
	==> dmesg <==
	[  +0.055341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061864] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.167738] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.146097] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.275460] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Sep23 11:20] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.427023] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.064317] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.485710] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.085617] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.206175] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.133414] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.878893] kauditd_printk_skb: 69 callbacks suppressed
	[Sep23 11:21] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 11:26] systemd-fstab-generator[2644]: Ignoring "noauto" option for root device
	[  +0.162670] systemd-fstab-generator[2656]: Ignoring "noauto" option for root device
	[  +0.175540] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.133471] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.290395] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +1.197489] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +1.862409] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +4.770005] kauditd_printk_skb: 184 callbacks suppressed
	[Sep23 11:27] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.108149] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.963559] kauditd_printk_skb: 12 callbacks suppressed
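These kernel messages come from the guest VM, not the host. If a fuller ring buffer than this excerpt is needed, it can be read over SSH (a sketch; `journalctl -k` is an alternative on the same guest):

	minikube ssh -p multinode-399279 -- "dmesg | tail -n 50"
	minikube ssh -p multinode-399279 -- sudo journalctl -k --no-pager | tail -n 50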
	
	
	==> etcd [587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f] <==
	{"level":"info","ts":"2024-09-23T11:26:55.168575Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T11:26:55.168797Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"226d7ac4e2309206","initial-advertise-peer-urls":["https://192.168.39.71:2380"],"listen-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.71:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:26:55.168837Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:26:55.168890Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"226d7ac4e2309206","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-23T11:26:55.172111Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174001Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174043Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174828Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:26:55.174874Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:26:55.808048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgPreVoteResp from 226d7ac4e2309206 at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.818244Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"226d7ac4e2309206","local-member-attributes":"{Name:multinode-399279 ClientURLs:[https://192.168.39.71:2379]}","request-path":"/0/members/226d7ac4e2309206/attributes","cluster-id":"98fbf1e9ed6d9a6e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:26:55.818377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:26:55.819620Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:26:55.822766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.71:2379"}
	{"level":"info","ts":"2024-09-23T11:26:55.820010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:26:55.826024Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:26:55.841347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:26:55.844045Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:26:55.845656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698] <==
	{"level":"info","ts":"2024-09-23T11:21:07.381853Z","caller":"traceutil/trace.go:171","msg":"trace[922763614] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:510; }","duration":"366.656121ms","start":"2024-09-23T11:21:07.015189Z","end":"2024-09-23T11:21:07.381845Z","steps":["trace[922763614] 'agreement among raft nodes before linearized reading'  (duration: 366.606671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:21:07.381749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.432943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-399279-m02\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-23T11:21:07.382346Z","caller":"traceutil/trace.go:171","msg":"trace[933645049] range","detail":"{range_begin:/registry/minions/multinode-399279-m02; range_end:; response_count:1; response_revision:510; }","duration":"125.041413ms","start":"2024-09-23T11:21:07.257296Z","end":"2024-09-23T11:21:07.382338Z","steps":["trace[933645049] 'agreement among raft nodes before linearized reading'  (duration: 124.408071ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:21:09.836849Z","caller":"traceutil/trace.go:171","msg":"trace[1974765449] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"165.50944ms","start":"2024-09-23T11:21:09.671325Z","end":"2024-09-23T11:21:09.836834Z","steps":["trace[1974765449] 'process raft request'  (duration: 165.239447ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:22:02.343234Z","caller":"traceutil/trace.go:171","msg":"trace[1251979350] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"229.692278ms","start":"2024-09-23T11:22:02.113509Z","end":"2024-09-23T11:22:02.343202Z","steps":["trace[1251979350] 'process raft request'  (duration: 140.707509ms)","trace[1251979350] 'compare'  (duration: 88.578407ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.226158Z","caller":"traceutil/trace.go:171","msg":"trace[1295102027] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"126.914634ms","start":"2024-09-23T11:22:04.099219Z","end":"2024-09-23T11:22:04.226134Z","steps":["trace[1295102027] 'read index received'  (duration: 126.675435ms)","trace[1295102027] 'applied index is now lower than readState.Index'  (duration: 238.69µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.226257Z","caller":"traceutil/trace.go:171","msg":"trace[830481837] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.556745ms","start":"2024-09-23T11:22:04.069692Z","end":"2024-09-23T11:22:04.226249Z","steps":["trace[830481837] 'process raft request'  (duration: 156.243309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.226632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.385138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-23T11:22:04.226699Z","caller":"traceutil/trace.go:171","msg":"trace[367210171] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:638; }","duration":"127.467922ms","start":"2024-09-23T11:22:04.099215Z","end":"2024-09-23T11:22:04.226683Z","steps":["trace[367210171] 'agreement among raft nodes before linearized reading'  (duration: 127.321016ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:22:04.471674Z","caller":"traceutil/trace.go:171","msg":"trace[1964086718] linearizableReadLoop","detail":"{readStateIndex:673; appliedIndex:672; }","duration":"237.377575ms","start":"2024-09-23T11:22:04.234261Z","end":"2024-09-23T11:22:04.471638Z","steps":["trace[1964086718] 'read index received'  (duration: 232.587623ms)","trace[1964086718] 'applied index is now lower than readState.Index'  (duration: 4.789342ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.471876Z","caller":"traceutil/trace.go:171","msg":"trace[1385562886] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"238.632541ms","start":"2024-09-23T11:22:04.233232Z","end":"2024-09-23T11:22:04.471865Z","steps":["trace[1385562886] 'process raft request'  (duration: 233.665946ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.472182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.907588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-f6k8p\" ","response":"range_response_count:1 size:3703"}
	{"level":"info","ts":"2024-09-23T11:22:04.472229Z","caller":"traceutil/trace.go:171","msg":"trace[1010372944] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-f6k8p; range_end:; response_count:1; response_revision:639; }","duration":"237.963059ms","start":"2024-09-23T11:22:04.234256Z","end":"2024-09-23T11:22:04.472220Z","steps":["trace[1010372944] 'agreement among raft nodes before linearized reading'  (duration: 237.828269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.472420Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.381297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-399279-m03\" ","response":"range_response_count:1 size:2824"}
	{"level":"info","ts":"2024-09-23T11:22:04.472464Z","caller":"traceutil/trace.go:171","msg":"trace[1107095018] range","detail":"{range_begin:/registry/minions/multinode-399279-m03; range_end:; response_count:1; response_revision:639; }","duration":"127.42897ms","start":"2024-09-23T11:22:04.345029Z","end":"2024-09-23T11:22:04.472458Z","steps":["trace[1107095018] 'agreement among raft nodes before linearized reading'  (duration: 127.320061ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:25:18.590062Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T11:25:18.590192Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-399279","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"]}
	{"level":"warn","ts":"2024-09-23T11:25:18.590322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.590424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.668285Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.71:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.668376Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.71:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:25:18.668571Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"226d7ac4e2309206","current-leader-member-id":"226d7ac4e2309206"}
	{"level":"info","ts":"2024-09-23T11:25:18.671641Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:25:18.671883Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:25:18.672037Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-399279","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"]}
	
	
	==> kernel <==
	 11:28:42 up 9 min,  0 users,  load average: 0.34, 0.30, 0.17
	Linux multinode-399279 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598] <==
	I0923 11:24:38.370664       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:24:48.370127       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:24:48.370194       1 main.go:299] handling current node
	I0923 11:24:48.370214       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:24:48.370220       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:24:48.370402       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:24:48.370429       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:24:58.371349       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:24:58.371439       1 main.go:299] handling current node
	I0923 11:24:58.371458       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:24:58.371466       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:24:58.371649       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:24:58.371673       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:08.365809       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:25:08.365925       1 main.go:299] handling current node
	I0923 11:25:08.366037       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:25:08.366073       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:25:08.366232       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:25:08.366254       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:18.363218       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:25:18.363261       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:25:18.363357       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:25:18.363382       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:18.363496       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:25:18.363538       1 main.go:299] handling current node
	
	
	==> kindnet [8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2] <==
	I0923 11:27:59.756130       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:28:09.762361       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:28:09.762498       1 main.go:299] handling current node
	I0923 11:28:09.762552       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:28:09.762572       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:28:09.762731       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:28:09.762753       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:28:19.755935       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:28:19.756127       1 main.go:299] handling current node
	I0923 11:28:19.756166       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:28:19.756595       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:28:19.756769       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:28:19.756796       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.2.0/24] 
	I0923 11:28:29.756545       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:28:29.756665       1 main.go:299] handling current node
	I0923 11:28:29.757102       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:28:29.757387       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:28:29.757754       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:28:29.757803       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.2.0/24] 
	I0923 11:28:39.756843       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:28:39.756924       1 main.go:299] handling current node
	I0923 11:28:39.756940       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:28:39.756946       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:28:39.757169       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:28:39.757196       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51] <==
	W0923 11:25:18.626503       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:25:18.626648       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:25:18.627012       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0923 11:25:18.627478       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0923 11:25:18.627668       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0923 11:25:18.627854       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0923 11:25:18.627907       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0923 11:25:18.627939       1 controller.go:132] Ending legacy_token_tracking_controller
	I0923 11:25:18.628048       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0923 11:25:18.628081       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0923 11:25:18.628172       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0923 11:25:18.628239       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0923 11:25:18.628292       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0923 11:25:18.628320       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0923 11:25:18.628443       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0923 11:25:18.628600       1 naming_controller.go:305] Shutting down NamingConditionController
	I0923 11:25:18.628659       1 controller.go:170] Shutting down OpenAPI controller
	I0923 11:25:18.628731       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0923 11:25:18.628915       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0923 11:25:18.629037       1 establishing_controller.go:92] Shutting down EstablishingController
	I0923 11:25:18.630043       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:25:18.630923       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:25:18.631015       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:25:18.631038       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0923 11:25:18.631108       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	
	
	==> kube-apiserver [aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381] <==
	I0923 11:26:57.777878       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:26:57.778022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:26:57.778190       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 11:26:57.779368       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:26:57.779880       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:26:57.780165       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:26:57.780286       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:26:57.784316       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:26:57.784466       1 policy_source.go:224] refreshing policies
	I0923 11:26:57.794889       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:26:57.795074       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:26:57.795112       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:26:57.795135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:26:57.795158       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:26:57.802628       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 11:26:57.807797       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 11:26:57.841210       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:26:58.694664       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:27:00.000810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 11:27:00.111327       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 11:27:00.123684       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 11:27:00.201259       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:27:00.211465       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:27:01.279594       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:27:01.473754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2] <==
	I0923 11:27:59.949711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:27:59.961344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.567µs"
	I0923 11:27:59.977445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.927µs"
	I0923 11:28:01.114882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:28:04.114532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.653559ms"
	I0923 11:28:04.115525       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.737µs"
	I0923 11:28:11.736913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:28:17.744608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:17.776108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:18.003923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:28:18.004590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:18.992224       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:28:18.992274       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-399279-m03\" does not exist"
	I0923 11:28:19.001778       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-399279-m03" podCIDRs=["10.244.2.0/24"]
	I0923 11:28:19.001819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.001841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.011849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.037929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.386396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:21.209327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:29.308349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:38.846854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:38.847059       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:28:38.858897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:41.137882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	
	
	==> kube-controller-manager [1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0] <==
	I0923 11:22:52.235422       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:22:52.235585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.201944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:22:53.206466       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-399279-m03\" does not exist"
	I0923 11:22:53.214410       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-399279-m03" podCIDRs=["10.244.3.0/24"]
	I0923 11:22:53.214662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.215314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.224699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.278703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.611952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:55.681774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:03.503374       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:13.072889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:23:13.072950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:13.084686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:15.641474       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.660546       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:23:55.660736       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.665340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:23:55.696395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.705625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:23:55.790294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.850805ms"
	I0923 11:23:55.790564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="158.284µs"
	I0923 11:24:00.797681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:24:10.875869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	
	
	==> kube-proxy [1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:26:59.050299       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:26:59.075594       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E0923 11:26:59.076120       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:26:59.169155       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:26:59.169201       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:26:59.169229       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:26:59.173218       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:26:59.173467       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:26:59.173494       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:26:59.177608       1 config.go:199] "Starting service config controller"
	I0923 11:26:59.177644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:26:59.178806       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:26:59.178832       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:26:59.180638       1 config.go:328] "Starting node config controller"
	I0923 11:26:59.180747       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:26:59.279354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:26:59.279439       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:26:59.280807       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:20:17.522630       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:20:17.536227       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E0923 11:20:17.536343       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:20:17.575879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:20:17.575938       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:20:17.576018       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:20:17.578651       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:20:17.579185       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:20:17.579213       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:20:17.581037       1 config.go:199] "Starting service config controller"
	I0923 11:20:17.581066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:20:17.581090       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:20:17.581094       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:20:17.581451       1 config.go:328] "Starting node config controller"
	I0923 11:20:17.581482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:20:17.681858       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:20:17.681892       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:20:17.681907       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880] <==
	E0923 11:20:08.627433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:08.627688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:20:08.627722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.473093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.473204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.522772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.523460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.638000       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:20:09.639326       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 11:20:09.679617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.679843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.681245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:20:09.681360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.689302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:20:09.689410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.699339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:20:09.699437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.773052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:20:09.774347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.823156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:20:09.823466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0923 11:20:11.816949       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:25:18.596877       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 11:25:18.597057       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 11:25:18.600086       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be] <==
	I0923 11:26:55.653160       1 serving.go:386] Generated self-signed cert in-memory
	W0923 11:26:57.717360       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:26:57.717565       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:26:57.717649       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:26:57.717678       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:26:57.777665       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 11:26:57.777716       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:26:57.791443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 11:26:57.791506       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:26:57.794208       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 11:26:57.794289       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:26:57.892264       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:27:04 multinode-399279 kubelet[2933]: E0923 11:27:04.033135    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090824032495072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:04 multinode-399279 kubelet[2933]: E0923 11:27:04.033179    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090824032495072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:14 multinode-399279 kubelet[2933]: E0923 11:27:14.035177    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090834034712060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:14 multinode-399279 kubelet[2933]: E0923 11:27:14.035201    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090834034712060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:24 multinode-399279 kubelet[2933]: E0923 11:27:24.038838    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090844037425696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:24 multinode-399279 kubelet[2933]: E0923 11:27:24.045594    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090844037425696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:34 multinode-399279 kubelet[2933]: E0923 11:27:34.046476    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090854046300935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:34 multinode-399279 kubelet[2933]: E0923 11:27:34.046500    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090854046300935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:44 multinode-399279 kubelet[2933]: E0923 11:27:44.047844    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090864047562347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:44 multinode-399279 kubelet[2933]: E0923 11:27:44.047867    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090864047562347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:54 multinode-399279 kubelet[2933]: E0923 11:27:54.051781    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090874051296892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:54 multinode-399279 kubelet[2933]: E0923 11:27:54.051836    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090874051296892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:27:54 multinode-399279 kubelet[2933]: E0923 11:27:54.057841    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 11:27:54 multinode-399279 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:27:54 multinode-399279 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:27:54 multinode-399279 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:27:54 multinode-399279 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:28:04 multinode-399279 kubelet[2933]: E0923 11:28:04.055355    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090884054451258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:04 multinode-399279 kubelet[2933]: E0923 11:28:04.055915    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090884054451258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:14 multinode-399279 kubelet[2933]: E0923 11:28:14.057790    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090894057502130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:14 multinode-399279 kubelet[2933]: E0923 11:28:14.057816    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090894057502130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:24 multinode-399279 kubelet[2933]: E0923 11:28:24.059181    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090904058839971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:24 multinode-399279 kubelet[2933]: E0923 11:28:24.060107    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090904058839971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:34 multinode-399279 kubelet[2933]: E0923 11:28:34.063037    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090914062482881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:28:34 multinode-399279 kubelet[2933]: E0923 11:28:34.063386    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090914062482881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:28:41.311092   44308 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19689-3961/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-399279 -n multinode-399279
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-399279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.41s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (144.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 stop
E0923 11:28:58.500682   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:00.507106   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:29:15.431951   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-399279 stop: exit status 82 (2m0.465371864s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-399279-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-399279 stop": exit status 82
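Before retrying a failed stop like this one, it helps to capture exactly what the error box above asks for. A sketch of those triage steps, using the profile name and log path printed in the failure, is:

	# write the full minikube log bundle the issue template asks for
	out/minikube-linux-amd64 -p multinode-399279 logs --file=logs.txt
	# keep the stop-specific log file referenced in the error box
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .
	# check which node VMs are still running after the timed-out stop
	out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr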
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status
E0923 11:30:57.440328   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 status: (18.699945867s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr: (3.360246404s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr": 
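The two assertions above indicate that the counts of stopped hosts and stopped kubelets did not match what the test expected after the stop. The exact check lives in multinode_test.go and is not shown here; a hypothetical way to reproduce the counts by hand, assuming the default plain-text status format with one "host:" and one "kubelet:" line per node, is:

	# nodes whose VM actually reached the Stopped state
	out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr | grep -c '^host: Stopped'
	# kubelets that were actually shut down
	out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr | grep -c '^kubelet: Stopped'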
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-399279 -n multinode-399279
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 logs -n 25: (1.45723603s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279:/home/docker/cp-test_multinode-399279-m02_multinode-399279.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279 sudo cat                                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m02_multinode-399279.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03:/home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279-m03 sudo cat                                   | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp testdata/cp-test.txt                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279:/home/docker/cp-test_multinode-399279-m03_multinode-399279.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279 sudo cat                                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02:/home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279-m02 sudo cat                                   | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-399279 node stop m03                                                          | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	| node    | multinode-399279 node start                                                             | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| stop    | -p multinode-399279                                                                     | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| start   | -p multinode-399279                                                                     | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC |                     |
	| node    | multinode-399279 node delete                                                            | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-399279 stop                                                                   | multinode-399279 | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:25:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:25:17.643296   43161 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:25:17.643548   43161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:17.643558   43161 out.go:358] Setting ErrFile to fd 2...
	I0923 11:25:17.643562   43161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:17.643734   43161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:25:17.644257   43161 out.go:352] Setting JSON to false
	I0923 11:25:17.645140   43161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4061,"bootTime":1727086657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:25:17.645235   43161 start.go:139] virtualization: kvm guest
	I0923 11:25:17.648201   43161 out.go:177] * [multinode-399279] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:25:17.649601   43161 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:25:17.649605   43161 notify.go:220] Checking for updates...
	I0923 11:25:17.651084   43161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:25:17.652560   43161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:25:17.653672   43161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:25:17.654822   43161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:25:17.656325   43161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:25:17.658150   43161 config.go:182] Loaded profile config "multinode-399279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:25:17.658283   43161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:25:17.658954   43161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:25:17.659009   43161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:25:17.675358   43161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I0923 11:25:17.675823   43161 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:25:17.676357   43161 main.go:141] libmachine: Using API Version  1
	I0923 11:25:17.676378   43161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:25:17.676717   43161 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:25:17.676913   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.711972   43161 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 11:25:17.713147   43161 start.go:297] selected driver: kvm2
	I0923 11:25:17.713161   43161 start.go:901] validating driver "kvm2" against &{Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:25:17.713321   43161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:25:17.713776   43161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:25:17.713870   43161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:25:17.728386   43161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:25:17.729063   43161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:25:17.729090   43161 cni.go:84] Creating CNI manager for ""
	I0923 11:25:17.729137   43161 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 11:25:17.729194   43161 start.go:340] cluster config:
	{Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:25:17.729344   43161 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:25:17.731024   43161 out.go:177] * Starting "multinode-399279" primary control-plane node in "multinode-399279" cluster
	I0923 11:25:17.732078   43161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:25:17.732120   43161 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 11:25:17.732127   43161 cache.go:56] Caching tarball of preloaded images
	I0923 11:25:17.732210   43161 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:25:17.732223   43161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 11:25:17.732355   43161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/config.json ...
	I0923 11:25:17.732601   43161 start.go:360] acquireMachinesLock for multinode-399279: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:25:17.732646   43161 start.go:364] duration metric: took 25.789µs to acquireMachinesLock for "multinode-399279"
	I0923 11:25:17.732660   43161 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:25:17.732665   43161 fix.go:54] fixHost starting: 
	I0923 11:25:17.732918   43161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:25:17.732947   43161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:25:17.747109   43161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0923 11:25:17.747543   43161 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:25:17.748038   43161 main.go:141] libmachine: Using API Version  1
	I0923 11:25:17.748056   43161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:25:17.748378   43161 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:25:17.748616   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.748774   43161 main.go:141] libmachine: (multinode-399279) Calling .GetState
	I0923 11:25:17.750248   43161 fix.go:112] recreateIfNeeded on multinode-399279: state=Running err=<nil>
	W0923 11:25:17.750265   43161 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:25:17.752010   43161 out.go:177] * Updating the running kvm2 "multinode-399279" VM ...
	I0923 11:25:17.753106   43161 machine.go:93] provisionDockerMachine start ...
	I0923 11:25:17.753124   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:25:17.753297   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.755684   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.756070   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.756095   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.756173   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.756347   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.756517   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.756673   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.756824   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.757020   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.757033   43161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:25:17.866595   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-399279
	
	I0923 11:25:17.866629   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:17.866848   43161 buildroot.go:166] provisioning hostname "multinode-399279"
	I0923 11:25:17.866874   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:17.867056   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.870010   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.870433   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.870454   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.870638   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.870822   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.870965   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.871096   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.871276   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.871445   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.871459   43161 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-399279 && echo "multinode-399279" | sudo tee /etc/hostname
	I0923 11:25:17.994229   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-399279
	
	I0923 11:25:17.994287   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:17.996842   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.997303   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:17.997328   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:17.997515   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:17.997713   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.997862   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:17.997981   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:17.998139   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:17.998328   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:17.998344   43161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-399279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-399279/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-399279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:25:18.106312   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:25:18.106348   43161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:25:18.106377   43161 buildroot.go:174] setting up certificates
	I0923 11:25:18.106389   43161 provision.go:84] configureAuth start
	I0923 11:25:18.106397   43161 main.go:141] libmachine: (multinode-399279) Calling .GetMachineName
	I0923 11:25:18.106647   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:25:18.109144   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.109530   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.109556   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.109711   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.111747   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.112146   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.112167   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.112249   43161 provision.go:143] copyHostCerts
	I0923 11:25:18.112279   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:25:18.112312   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:25:18.112326   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:25:18.112395   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:25:18.112490   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:25:18.112517   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:25:18.112528   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:25:18.112570   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:25:18.112628   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:25:18.112645   43161 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:25:18.112651   43161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:25:18.112675   43161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:25:18.112721   43161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.multinode-399279 san=[127.0.0.1 192.168.39.71 localhost minikube multinode-399279]
	I0923 11:25:18.291323   43161 provision.go:177] copyRemoteCerts
	I0923 11:25:18.291391   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:25:18.291419   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.294125   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.294385   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.294405   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.294567   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:18.294728   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.294864   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:18.294966   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:25:18.380924   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 11:25:18.380990   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:25:18.408331   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 11:25:18.408407   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 11:25:18.432961   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 11:25:18.433027   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:25:18.458442   43161 provision.go:87] duration metric: took 352.041262ms to configureAuth
	I0923 11:25:18.458466   43161 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:25:18.458663   43161 config.go:182] Loaded profile config "multinode-399279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:25:18.458731   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:25:18.461353   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.461710   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:25:18.461739   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:25:18.461928   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:25:18.462129   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.462310   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:25:18.462452   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:25:18.462635   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:25:18.462806   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:25:18.462821   43161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:26:49.262238   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:26:49.262268   43161 machine.go:96] duration metric: took 1m31.509149402s to provisionDockerMachine
	I0923 11:26:49.262281   43161 start.go:293] postStartSetup for "multinode-399279" (driver="kvm2")
	I0923 11:26:49.262292   43161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:26:49.262314   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.262672   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:26:49.262699   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.265711   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.266146   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.266174   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.266480   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.266694   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.266894   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.267070   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.352798   43161 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:26:49.356978   43161 command_runner.go:130] > NAME=Buildroot
	I0923 11:26:49.357000   43161 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 11:26:49.357007   43161 command_runner.go:130] > ID=buildroot
	I0923 11:26:49.357015   43161 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 11:26:49.357023   43161 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 11:26:49.357057   43161 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:26:49.357072   43161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:26:49.357147   43161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:26:49.357227   43161 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:26:49.357236   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /etc/ssl/certs/111392.pem
	I0923 11:26:49.357324   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:26:49.366730   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:26:49.390798   43161 start.go:296] duration metric: took 128.504928ms for postStartSetup
	I0923 11:26:49.390835   43161 fix.go:56] duration metric: took 1m31.658169753s for fixHost
	I0923 11:26:49.390854   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.393571   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.394016   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.394044   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.394199   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.394408   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.394606   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.394771   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.394936   43161 main.go:141] libmachine: Using SSH client type: native
	I0923 11:26:49.395130   43161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0923 11:26:49.395141   43161 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:26:49.502407   43161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727090809.479251002
	
	I0923 11:26:49.502435   43161 fix.go:216] guest clock: 1727090809.479251002
	I0923 11:26:49.502442   43161 fix.go:229] Guest: 2024-09-23 11:26:49.479251002 +0000 UTC Remote: 2024-09-23 11:26:49.390839845 +0000 UTC m=+91.782828835 (delta=88.411157ms)
	I0923 11:26:49.502488   43161 fix.go:200] guest clock delta is within tolerance: 88.411157ms
	I0923 11:26:49.502494   43161 start.go:83] releasing machines lock for "multinode-399279", held for 1m31.769838702s
	I0923 11:26:49.502515   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.502752   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:26:49.505199   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.505554   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.505579   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.505777   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506264   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506462   43161 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:26:49.506551   43161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:26:49.506598   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.506657   43161 ssh_runner.go:195] Run: cat /version.json
	I0923 11:26:49.506684   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:26:49.509049   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509218   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509420   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.509447   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509598   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.509680   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:49.509704   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:49.509753   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.509866   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:26:49.509926   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.510009   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:26:49.510080   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.510423   43161 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:26:49.510551   43161 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:26:49.613916   43161 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 11:26:49.614128   43161 ssh_runner.go:195] Run: systemctl --version
	I0923 11:26:49.640952   43161 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 11:26:49.641605   43161 command_runner.go:130] > systemd 252 (252)
	I0923 11:26:49.641653   43161 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 11:26:49.641716   43161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:26:49.802111   43161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:26:49.810204   43161 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 11:26:49.810470   43161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:26:49.810542   43161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:26:49.820337   43161 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:26:49.820362   43161 start.go:495] detecting cgroup driver to use...
	I0923 11:26:49.820416   43161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:26:49.837883   43161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:26:49.852495   43161 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:26:49.852570   43161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:26:49.867616   43161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:26:49.882264   43161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:26:50.044463   43161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:26:50.198999   43161 docker.go:233] disabling docker service ...
	I0923 11:26:50.199075   43161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:26:50.216859   43161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:26:50.230927   43161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:26:50.371252   43161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:26:50.513133   43161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:26:50.527234   43161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:26:50.548272   43161 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0923 11:26:50.548690   43161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:26:50.548744   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.559778   43161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:26:50.559839   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.571347   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.583042   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.593824   43161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:26:50.604652   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.615471   43161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:26:50.626063   43161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
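
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (reconstructed from the logged commands, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
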
	I0923 11:26:50.636655   43161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:26:50.646298   43161 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 11:26:50.646368   43161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
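
Both knobs touched here are kernel prerequisites for Kubernetes networking: net.bridge.bridge-nf-call-iptables must be 1 so bridged pod traffic traverses iptables, and net.ipv4.ip_forward must be 1 so the node can route pod traffic. The /proc write above is equivalent to the following sysctl invocation (a sketch; minikube itself uses the echo shown in the log):

    sudo sysctl -w net.ipv4.ip_forward=1
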
	I0923 11:26:50.656018   43161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:26:50.794419   43161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:26:51.532100   43161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:26:51.532167   43161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:26:51.537187   43161 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0923 11:26:51.537214   43161 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 11:26:51.537220   43161 command_runner.go:130] > Device: 0,22	Inode: 1314        Links: 1
	I0923 11:26:51.537227   43161 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:26:51.537234   43161 command_runner.go:130] > Access: 2024-09-23 11:26:51.430312044 +0000
	I0923 11:26:51.537242   43161 command_runner.go:130] > Modify: 2024-09-23 11:26:51.414311712 +0000
	I0923 11:26:51.537268   43161 command_runner.go:130] > Change: 2024-09-23 11:26:51.414311712 +0000
	I0923 11:26:51.537278   43161 command_runner.go:130] >  Birth: -
	I0923 11:26:51.537299   43161 start.go:563] Will wait 60s for crictl version
	I0923 11:26:51.537337   43161 ssh_runner.go:195] Run: which crictl
	I0923 11:26:51.541043   43161 command_runner.go:130] > /usr/bin/crictl
	I0923 11:26:51.541120   43161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:26:51.580357   43161 command_runner.go:130] > Version:  0.1.0
	I0923 11:26:51.580485   43161 command_runner.go:130] > RuntimeName:  cri-o
	I0923 11:26:51.580512   43161 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0923 11:26:51.580661   43161 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 11:26:51.581924   43161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
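
The version probe above can be reproduced by hand on the node; a minimal sketch, assuming crictl is installed at /usr/bin/crictl as detected above and CRI-O is listening on the socket written to /etc/crictl.yaml earlier:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

With /etc/crictl.yaml in place the --runtime-endpoint flag is optional, which is why the log simply runs "sudo /usr/bin/crictl version".
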
	I0923 11:26:51.581983   43161 ssh_runner.go:195] Run: crio --version
	I0923 11:26:51.610968   43161 command_runner.go:130] > crio version 1.29.1
	I0923 11:26:51.610989   43161 command_runner.go:130] > Version:        1.29.1
	I0923 11:26:51.610996   43161 command_runner.go:130] > GitCommit:      unknown
	I0923 11:26:51.611000   43161 command_runner.go:130] > GitCommitDate:  unknown
	I0923 11:26:51.611004   43161 command_runner.go:130] > GitTreeState:   clean
	I0923 11:26:51.611011   43161 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 11:26:51.611015   43161 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 11:26:51.611019   43161 command_runner.go:130] > Compiler:       gc
	I0923 11:26:51.611023   43161 command_runner.go:130] > Platform:       linux/amd64
	I0923 11:26:51.611027   43161 command_runner.go:130] > Linkmode:       dynamic
	I0923 11:26:51.611031   43161 command_runner.go:130] > BuildTags:      
	I0923 11:26:51.611043   43161 command_runner.go:130] >   containers_image_ostree_stub
	I0923 11:26:51.611053   43161 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 11:26:51.611059   43161 command_runner.go:130] >   btrfs_noversion
	I0923 11:26:51.611069   43161 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 11:26:51.611075   43161 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 11:26:51.611083   43161 command_runner.go:130] >   seccomp
	I0923 11:26:51.611091   43161 command_runner.go:130] > LDFlags:          unknown
	I0923 11:26:51.611101   43161 command_runner.go:130] > SeccompEnabled:   true
	I0923 11:26:51.611106   43161 command_runner.go:130] > AppArmorEnabled:  false
	I0923 11:26:51.611191   43161 ssh_runner.go:195] Run: crio --version
	I0923 11:26:51.640366   43161 command_runner.go:130] > crio version 1.29.1
	I0923 11:26:51.640392   43161 command_runner.go:130] > Version:        1.29.1
	I0923 11:26:51.640399   43161 command_runner.go:130] > GitCommit:      unknown
	I0923 11:26:51.640411   43161 command_runner.go:130] > GitCommitDate:  unknown
	I0923 11:26:51.640418   43161 command_runner.go:130] > GitTreeState:   clean
	I0923 11:26:51.640429   43161 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0923 11:26:51.640434   43161 command_runner.go:130] > GoVersion:      go1.21.6
	I0923 11:26:51.640438   43161 command_runner.go:130] > Compiler:       gc
	I0923 11:26:51.640443   43161 command_runner.go:130] > Platform:       linux/amd64
	I0923 11:26:51.640448   43161 command_runner.go:130] > Linkmode:       dynamic
	I0923 11:26:51.640453   43161 command_runner.go:130] > BuildTags:      
	I0923 11:26:51.640458   43161 command_runner.go:130] >   containers_image_ostree_stub
	I0923 11:26:51.640462   43161 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0923 11:26:51.640466   43161 command_runner.go:130] >   btrfs_noversion
	I0923 11:26:51.640471   43161 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0923 11:26:51.640475   43161 command_runner.go:130] >   libdm_no_deferred_remove
	I0923 11:26:51.640479   43161 command_runner.go:130] >   seccomp
	I0923 11:26:51.640484   43161 command_runner.go:130] > LDFlags:          unknown
	I0923 11:26:51.640488   43161 command_runner.go:130] > SeccompEnabled:   true
	I0923 11:26:51.640494   43161 command_runner.go:130] > AppArmorEnabled:  false
	I0923 11:26:51.642610   43161 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:26:51.643704   43161 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:26:51.646202   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:51.646532   43161 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:26:51.646559   43161 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:26:51.646701   43161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:26:51.650751   43161 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0923 11:26:51.650979   43161 kubeadm.go:883] updating cluster {Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:26:51.651107   43161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:26:51.651163   43161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:26:51.695073   43161 command_runner.go:130] > {
	I0923 11:26:51.695093   43161 command_runner.go:130] >   "images": [
	I0923 11:26:51.695100   43161 command_runner.go:130] >     {
	I0923 11:26:51.695107   43161 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 11:26:51.695112   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695118   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 11:26:51.695123   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695127   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695137   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 11:26:51.695144   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 11:26:51.695148   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695152   43161 command_runner.go:130] >       "size": "87190579",
	I0923 11:26:51.695156   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695160   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695165   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695175   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695179   43161 command_runner.go:130] >     },
	I0923 11:26:51.695182   43161 command_runner.go:130] >     {
	I0923 11:26:51.695190   43161 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 11:26:51.695196   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695205   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 11:26:51.695211   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695217   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695229   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 11:26:51.695246   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 11:26:51.695251   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695257   43161 command_runner.go:130] >       "size": "1363676",
	I0923 11:26:51.695263   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695274   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695281   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695286   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695294   43161 command_runner.go:130] >     },
	I0923 11:26:51.695299   43161 command_runner.go:130] >     {
	I0923 11:26:51.695310   43161 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 11:26:51.695319   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695330   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 11:26:51.695339   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695344   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695358   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 11:26:51.695373   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 11:26:51.695381   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695388   43161 command_runner.go:130] >       "size": "31470524",
	I0923 11:26:51.695396   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695400   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695404   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695407   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695413   43161 command_runner.go:130] >     },
	I0923 11:26:51.695418   43161 command_runner.go:130] >     {
	I0923 11:26:51.695426   43161 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 11:26:51.695432   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695437   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 11:26:51.695443   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695446   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695462   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 11:26:51.695478   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 11:26:51.695484   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695488   43161 command_runner.go:130] >       "size": "63273227",
	I0923 11:26:51.695494   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695499   43161 command_runner.go:130] >       "username": "nonroot",
	I0923 11:26:51.695505   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695509   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695515   43161 command_runner.go:130] >     },
	I0923 11:26:51.695519   43161 command_runner.go:130] >     {
	I0923 11:26:51.695528   43161 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 11:26:51.695536   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695543   43161 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 11:26:51.695547   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695552   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695559   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 11:26:51.695567   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 11:26:51.695571   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695577   43161 command_runner.go:130] >       "size": "149009664",
	I0923 11:26:51.695581   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695585   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695590   43161 command_runner.go:130] >       },
	I0923 11:26:51.695594   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695601   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695605   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695611   43161 command_runner.go:130] >     },
	I0923 11:26:51.695614   43161 command_runner.go:130] >     {
	I0923 11:26:51.695622   43161 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 11:26:51.695627   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695632   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 11:26:51.695637   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695641   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695650   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 11:26:51.695657   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 11:26:51.695663   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695667   43161 command_runner.go:130] >       "size": "95237600",
	I0923 11:26:51.695673   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695677   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695683   43161 command_runner.go:130] >       },
	I0923 11:26:51.695686   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695692   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695696   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695702   43161 command_runner.go:130] >     },
	I0923 11:26:51.695705   43161 command_runner.go:130] >     {
	I0923 11:26:51.695713   43161 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 11:26:51.695718   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695723   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 11:26:51.695729   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695732   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695739   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 11:26:51.695749   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 11:26:51.695754   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695759   43161 command_runner.go:130] >       "size": "89437508",
	I0923 11:26:51.695764   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695768   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695773   43161 command_runner.go:130] >       },
	I0923 11:26:51.695777   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695783   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695787   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695793   43161 command_runner.go:130] >     },
	I0923 11:26:51.695796   43161 command_runner.go:130] >     {
	I0923 11:26:51.695803   43161 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 11:26:51.695810   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695815   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 11:26:51.695820   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695824   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695839   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 11:26:51.695849   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 11:26:51.695854   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695858   43161 command_runner.go:130] >       "size": "92733849",
	I0923 11:26:51.695864   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.695868   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695873   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695878   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695881   43161 command_runner.go:130] >     },
	I0923 11:26:51.695884   43161 command_runner.go:130] >     {
	I0923 11:26:51.695890   43161 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 11:26:51.695893   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695898   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 11:26:51.695901   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695904   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695911   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 11:26:51.695918   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 11:26:51.695921   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695925   43161 command_runner.go:130] >       "size": "68420934",
	I0923 11:26:51.695928   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.695932   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.695935   43161 command_runner.go:130] >       },
	I0923 11:26:51.695938   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.695942   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.695945   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.695948   43161 command_runner.go:130] >     },
	I0923 11:26:51.695951   43161 command_runner.go:130] >     {
	I0923 11:26:51.695956   43161 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 11:26:51.695965   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.695970   43161 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 11:26:51.695973   43161 command_runner.go:130] >       ],
	I0923 11:26:51.695977   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.695983   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 11:26:51.695989   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 11:26:51.695996   43161 command_runner.go:130] >       ],
	I0923 11:26:51.696000   43161 command_runner.go:130] >       "size": "742080",
	I0923 11:26:51.696006   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.696010   43161 command_runner.go:130] >         "value": "65535"
	I0923 11:26:51.696015   43161 command_runner.go:130] >       },
	I0923 11:26:51.696019   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.696026   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.696029   43161 command_runner.go:130] >       "pinned": true
	I0923 11:26:51.696035   43161 command_runner.go:130] >     }
	I0923 11:26:51.696038   43161 command_runner.go:130] >   ]
	I0923 11:26:51.696041   43161 command_runner.go:130] > }
	I0923 11:26:51.696197   43161 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:26:51.696208   43161 crio.go:433] Images already preloaded, skipping extraction
	I0923 11:26:51.696249   43161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:26:51.730298   43161 command_runner.go:130] > {
	I0923 11:26:51.730326   43161 command_runner.go:130] >   "images": [
	I0923 11:26:51.730332   43161 command_runner.go:130] >     {
	I0923 11:26:51.730345   43161 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0923 11:26:51.730352   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730361   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0923 11:26:51.730367   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730373   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730386   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0923 11:26:51.730401   43161 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0923 11:26:51.730408   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730414   43161 command_runner.go:130] >       "size": "87190579",
	I0923 11:26:51.730421   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730430   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730443   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730456   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730463   43161 command_runner.go:130] >     },
	I0923 11:26:51.730469   43161 command_runner.go:130] >     {
	I0923 11:26:51.730478   43161 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0923 11:26:51.730488   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730497   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0923 11:26:51.730506   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730512   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730525   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0923 11:26:51.730535   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0923 11:26:51.730541   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730551   43161 command_runner.go:130] >       "size": "1363676",
	I0923 11:26:51.730558   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730570   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730579   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730587   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730592   43161 command_runner.go:130] >     },
	I0923 11:26:51.730596   43161 command_runner.go:130] >     {
	I0923 11:26:51.730602   43161 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0923 11:26:51.730608   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730614   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0923 11:26:51.730619   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730625   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730639   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0923 11:26:51.730654   43161 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0923 11:26:51.730663   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730669   43161 command_runner.go:130] >       "size": "31470524",
	I0923 11:26:51.730677   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730686   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730695   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730707   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730717   43161 command_runner.go:130] >     },
	I0923 11:26:51.730723   43161 command_runner.go:130] >     {
	I0923 11:26:51.730735   43161 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0923 11:26:51.730745   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730754   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0923 11:26:51.730763   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730773   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730788   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0923 11:26:51.730806   43161 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0923 11:26:51.730815   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730825   43161 command_runner.go:130] >       "size": "63273227",
	I0923 11:26:51.730833   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.730843   43161 command_runner.go:130] >       "username": "nonroot",
	I0923 11:26:51.730851   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.730856   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.730863   43161 command_runner.go:130] >     },
	I0923 11:26:51.730871   43161 command_runner.go:130] >     {
	I0923 11:26:51.730881   43161 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0923 11:26:51.730890   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.730899   43161 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0923 11:26:51.730907   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730917   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.730931   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0923 11:26:51.730945   43161 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0923 11:26:51.730954   43161 command_runner.go:130] >       ],
	I0923 11:26:51.730964   43161 command_runner.go:130] >       "size": "149009664",
	I0923 11:26:51.730973   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.730981   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.730988   43161 command_runner.go:130] >       },
	I0923 11:26:51.730991   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.730995   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731001   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731005   43161 command_runner.go:130] >     },
	I0923 11:26:51.731012   43161 command_runner.go:130] >     {
	I0923 11:26:51.731020   43161 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0923 11:26:51.731026   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731031   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0923 11:26:51.731036   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731040   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731049   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0923 11:26:51.731058   43161 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0923 11:26:51.731063   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731067   43161 command_runner.go:130] >       "size": "95237600",
	I0923 11:26:51.731073   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731077   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731083   43161 command_runner.go:130] >       },
	I0923 11:26:51.731087   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731093   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731097   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731101   43161 command_runner.go:130] >     },
	I0923 11:26:51.731105   43161 command_runner.go:130] >     {
	I0923 11:26:51.731113   43161 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0923 11:26:51.731117   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731122   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0923 11:26:51.731125   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731129   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731139   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0923 11:26:51.731148   43161 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0923 11:26:51.731153   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731157   43161 command_runner.go:130] >       "size": "89437508",
	I0923 11:26:51.731163   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731167   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731173   43161 command_runner.go:130] >       },
	I0923 11:26:51.731177   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731182   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731185   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731190   43161 command_runner.go:130] >     },
	I0923 11:26:51.731193   43161 command_runner.go:130] >     {
	I0923 11:26:51.731199   43161 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0923 11:26:51.731206   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731211   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0923 11:26:51.731216   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731220   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731233   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0923 11:26:51.731245   43161 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0923 11:26:51.731249   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731253   43161 command_runner.go:130] >       "size": "92733849",
	I0923 11:26:51.731256   43161 command_runner.go:130] >       "uid": null,
	I0923 11:26:51.731259   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731263   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731266   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731269   43161 command_runner.go:130] >     },
	I0923 11:26:51.731272   43161 command_runner.go:130] >     {
	I0923 11:26:51.731278   43161 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0923 11:26:51.731282   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731286   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0923 11:26:51.731289   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731293   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731299   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0923 11:26:51.731306   43161 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0923 11:26:51.731309   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731313   43161 command_runner.go:130] >       "size": "68420934",
	I0923 11:26:51.731317   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731320   43161 command_runner.go:130] >         "value": "0"
	I0923 11:26:51.731324   43161 command_runner.go:130] >       },
	I0923 11:26:51.731327   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731330   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731334   43161 command_runner.go:130] >       "pinned": false
	I0923 11:26:51.731337   43161 command_runner.go:130] >     },
	I0923 11:26:51.731342   43161 command_runner.go:130] >     {
	I0923 11:26:51.731348   43161 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0923 11:26:51.731352   43161 command_runner.go:130] >       "repoTags": [
	I0923 11:26:51.731356   43161 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0923 11:26:51.731359   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731362   43161 command_runner.go:130] >       "repoDigests": [
	I0923 11:26:51.731369   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0923 11:26:51.731410   43161 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0923 11:26:51.731421   43161 command_runner.go:130] >       ],
	I0923 11:26:51.731426   43161 command_runner.go:130] >       "size": "742080",
	I0923 11:26:51.731429   43161 command_runner.go:130] >       "uid": {
	I0923 11:26:51.731433   43161 command_runner.go:130] >         "value": "65535"
	I0923 11:26:51.731438   43161 command_runner.go:130] >       },
	I0923 11:26:51.731442   43161 command_runner.go:130] >       "username": "",
	I0923 11:26:51.731448   43161 command_runner.go:130] >       "spec": null,
	I0923 11:26:51.731456   43161 command_runner.go:130] >       "pinned": true
	I0923 11:26:51.731462   43161 command_runner.go:130] >     }
	I0923 11:26:51.731465   43161 command_runner.go:130] >   ]
	I0923 11:26:51.731468   43161 command_runner.go:130] > }
	I0923 11:26:51.731584   43161 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:26:51.731594   43161 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:26:51.731601   43161 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.31.1 crio true true} ...
	I0923 11:26:51.731689   43161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-399279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
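
The empty "ExecStart=" followed by a second "ExecStart=..." in the kubelet snippet above is the standard systemd override idiom: the first line clears the unit's existing start command, the second replaces it with a kubelet invocation carrying the minikube-specific flags (--hostname-override, --node-ip, the bootstrap kubeconfig, and so on). To inspect the effective unit plus any drop-ins on the node, a sketch:

    sudo systemctl cat kubelet
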
	I0923 11:26:51.731759   43161 ssh_runner.go:195] Run: crio config
	I0923 11:26:51.764296   43161 command_runner.go:130] ! time="2024-09-23 11:26:51.741091425Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0923 11:26:51.770652   43161 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0923 11:26:51.777424   43161 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0923 11:26:51.777458   43161 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0923 11:26:51.777469   43161 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0923 11:26:51.777474   43161 command_runner.go:130] > #
	I0923 11:26:51.777484   43161 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0923 11:26:51.777497   43161 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0923 11:26:51.777506   43161 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0923 11:26:51.777519   43161 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0923 11:26:51.777526   43161 command_runner.go:130] > # reload'.
	I0923 11:26:51.777537   43161 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0923 11:26:51.777551   43161 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0923 11:26:51.777561   43161 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0923 11:26:51.777571   43161 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0923 11:26:51.777580   43161 command_runner.go:130] > [crio]
	I0923 11:26:51.777589   43161 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0923 11:26:51.777600   43161 command_runner.go:130] > # containers images, in this directory.
	I0923 11:26:51.777607   43161 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0923 11:26:51.777625   43161 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0923 11:26:51.777633   43161 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0923 11:26:51.777644   43161 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0923 11:26:51.777652   43161 command_runner.go:130] > # imagestore = ""
	I0923 11:26:51.777661   43161 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0923 11:26:51.777670   43161 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0923 11:26:51.777680   43161 command_runner.go:130] > storage_driver = "overlay"
	I0923 11:26:51.777689   43161 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0923 11:26:51.777700   43161 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0923 11:26:51.777708   43161 command_runner.go:130] > storage_option = [
	I0923 11:26:51.777715   43161 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0923 11:26:51.777723   43161 command_runner.go:130] > ]
	I0923 11:26:51.777732   43161 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0923 11:26:51.777745   43161 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0923 11:26:51.777754   43161 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0923 11:26:51.777766   43161 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0923 11:26:51.777778   43161 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0923 11:26:51.777788   43161 command_runner.go:130] > # always happen on a node reboot
	I0923 11:26:51.777798   43161 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0923 11:26:51.777814   43161 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0923 11:26:51.777826   43161 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0923 11:26:51.777837   43161 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0923 11:26:51.777848   43161 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0923 11:26:51.777862   43161 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0923 11:26:51.777878   43161 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0923 11:26:51.777887   43161 command_runner.go:130] > # internal_wipe = true
	I0923 11:26:51.777898   43161 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0923 11:26:51.777903   43161 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0923 11:26:51.777909   43161 command_runner.go:130] > # internal_repair = false
	I0923 11:26:51.777917   43161 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0923 11:26:51.777924   43161 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0923 11:26:51.777930   43161 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0923 11:26:51.777937   43161 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0923 11:26:51.777944   43161 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0923 11:26:51.777950   43161 command_runner.go:130] > [crio.api]
	I0923 11:26:51.777955   43161 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0923 11:26:51.777961   43161 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0923 11:26:51.777966   43161 command_runner.go:130] > # IP address on which the stream server will listen.
	I0923 11:26:51.777973   43161 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0923 11:26:51.777979   43161 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0923 11:26:51.777986   43161 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0923 11:26:51.777990   43161 command_runner.go:130] > # stream_port = "0"
	I0923 11:26:51.777997   43161 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0923 11:26:51.778001   43161 command_runner.go:130] > # stream_enable_tls = false
	I0923 11:26:51.778008   43161 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0923 11:26:51.778013   43161 command_runner.go:130] > # stream_idle_timeout = ""
	I0923 11:26:51.778023   43161 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0923 11:26:51.778031   43161 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0923 11:26:51.778036   43161 command_runner.go:130] > # minutes.
	I0923 11:26:51.778041   43161 command_runner.go:130] > # stream_tls_cert = ""
	I0923 11:26:51.778048   43161 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0923 11:26:51.778055   43161 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0923 11:26:51.778059   43161 command_runner.go:130] > # stream_tls_key = ""
	I0923 11:26:51.778067   43161 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0923 11:26:51.778075   43161 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0923 11:26:51.778087   43161 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0923 11:26:51.778094   43161 command_runner.go:130] > # stream_tls_ca = ""
	I0923 11:26:51.778101   43161 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 11:26:51.778109   43161 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0923 11:26:51.778119   43161 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0923 11:26:51.778125   43161 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0923 11:26:51.778131   43161 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0923 11:26:51.778138   43161 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0923 11:26:51.778142   43161 command_runner.go:130] > [crio.runtime]
	I0923 11:26:51.778147   43161 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0923 11:26:51.778154   43161 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0923 11:26:51.778158   43161 command_runner.go:130] > # "nofile=1024:2048"
	I0923 11:26:51.778166   43161 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0923 11:26:51.778172   43161 command_runner.go:130] > # default_ulimits = [
	I0923 11:26:51.778175   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778182   43161 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0923 11:26:51.778186   43161 command_runner.go:130] > # no_pivot = false
	I0923 11:26:51.778193   43161 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0923 11:26:51.778199   43161 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0923 11:26:51.778208   43161 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0923 11:26:51.778215   43161 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0923 11:26:51.778222   43161 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0923 11:26:51.778229   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 11:26:51.778236   43161 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0923 11:26:51.778240   43161 command_runner.go:130] > # Cgroup setting for conmon
	I0923 11:26:51.778248   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0923 11:26:51.778254   43161 command_runner.go:130] > conmon_cgroup = "pod"
	I0923 11:26:51.778260   43161 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0923 11:26:51.778266   43161 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0923 11:26:51.778272   43161 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0923 11:26:51.778279   43161 command_runner.go:130] > conmon_env = [
	I0923 11:26:51.778284   43161 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 11:26:51.778290   43161 command_runner.go:130] > ]
	I0923 11:26:51.778295   43161 command_runner.go:130] > # Additional environment variables to set for all the
	I0923 11:26:51.778302   43161 command_runner.go:130] > # containers. These are overridden if set in the
	I0923 11:26:51.778307   43161 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0923 11:26:51.778314   43161 command_runner.go:130] > # default_env = [
	I0923 11:26:51.778317   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778325   43161 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0923 11:26:51.778332   43161 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0923 11:26:51.778338   43161 command_runner.go:130] > # selinux = false
	I0923 11:26:51.778345   43161 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0923 11:26:51.778352   43161 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0923 11:26:51.778360   43161 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0923 11:26:51.778363   43161 command_runner.go:130] > # seccomp_profile = ""
	I0923 11:26:51.778371   43161 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0923 11:26:51.778376   43161 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0923 11:26:51.778384   43161 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0923 11:26:51.778388   43161 command_runner.go:130] > # which might increase security.
	I0923 11:26:51.778394   43161 command_runner.go:130] > # This option is currently deprecated,
	I0923 11:26:51.778400   43161 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0923 11:26:51.778406   43161 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0923 11:26:51.778412   43161 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0923 11:26:51.778419   43161 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0923 11:26:51.778426   43161 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0923 11:26:51.778433   43161 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0923 11:26:51.778439   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.778444   43161 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0923 11:26:51.778455   43161 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0923 11:26:51.778462   43161 command_runner.go:130] > # the cgroup blockio controller.
	I0923 11:26:51.778466   43161 command_runner.go:130] > # blockio_config_file = ""
	I0923 11:26:51.778475   43161 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0923 11:26:51.778479   43161 command_runner.go:130] > # blockio parameters.
	I0923 11:26:51.778483   43161 command_runner.go:130] > # blockio_reload = false
	I0923 11:26:51.778490   43161 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0923 11:26:51.778496   43161 command_runner.go:130] > # irqbalance daemon.
	I0923 11:26:51.778501   43161 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0923 11:26:51.778509   43161 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0923 11:26:51.778517   43161 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0923 11:26:51.778526   43161 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0923 11:26:51.778533   43161 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0923 11:26:51.778542   43161 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0923 11:26:51.778549   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.778552   43161 command_runner.go:130] > # rdt_config_file = ""
	I0923 11:26:51.778559   43161 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0923 11:26:51.778563   43161 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0923 11:26:51.778581   43161 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0923 11:26:51.778587   43161 command_runner.go:130] > # separate_pull_cgroup = ""
	I0923 11:26:51.778593   43161 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0923 11:26:51.778601   43161 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0923 11:26:51.778605   43161 command_runner.go:130] > # will be added.
	I0923 11:26:51.778609   43161 command_runner.go:130] > # default_capabilities = [
	I0923 11:26:51.778615   43161 command_runner.go:130] > # 	"CHOWN",
	I0923 11:26:51.778619   43161 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0923 11:26:51.778624   43161 command_runner.go:130] > # 	"FSETID",
	I0923 11:26:51.778628   43161 command_runner.go:130] > # 	"FOWNER",
	I0923 11:26:51.778634   43161 command_runner.go:130] > # 	"SETGID",
	I0923 11:26:51.778638   43161 command_runner.go:130] > # 	"SETUID",
	I0923 11:26:51.778644   43161 command_runner.go:130] > # 	"SETPCAP",
	I0923 11:26:51.778648   43161 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0923 11:26:51.778654   43161 command_runner.go:130] > # 	"KILL",
	I0923 11:26:51.778657   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778666   43161 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0923 11:26:51.778675   43161 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0923 11:26:51.778679   43161 command_runner.go:130] > # add_inheritable_capabilities = false
	I0923 11:26:51.778687   43161 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0923 11:26:51.778693   43161 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 11:26:51.778699   43161 command_runner.go:130] > default_sysctls = [
	I0923 11:26:51.778704   43161 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0923 11:26:51.778709   43161 command_runner.go:130] > ]
	I0923 11:26:51.778713   43161 command_runner.go:130] > # List of devices on the host that a
	I0923 11:26:51.778721   43161 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0923 11:26:51.778730   43161 command_runner.go:130] > # allowed_devices = [
	I0923 11:26:51.778736   43161 command_runner.go:130] > # 	"/dev/fuse",
	I0923 11:26:51.778744   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778751   43161 command_runner.go:130] > # List of additional devices, specified as
	I0923 11:26:51.778764   43161 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0923 11:26:51.778775   43161 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0923 11:26:51.778784   43161 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0923 11:26:51.778792   43161 command_runner.go:130] > # additional_devices = [
	I0923 11:26:51.778798   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778807   43161 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0923 11:26:51.778815   43161 command_runner.go:130] > # cdi_spec_dirs = [
	I0923 11:26:51.778821   43161 command_runner.go:130] > # 	"/etc/cdi",
	I0923 11:26:51.778825   43161 command_runner.go:130] > # 	"/var/run/cdi",
	I0923 11:26:51.778830   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778837   43161 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0923 11:26:51.778845   43161 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0923 11:26:51.778851   43161 command_runner.go:130] > # Defaults to false.
	I0923 11:26:51.778857   43161 command_runner.go:130] > # device_ownership_from_security_context = false
	I0923 11:26:51.778865   43161 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0923 11:26:51.778873   43161 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0923 11:26:51.778878   43161 command_runner.go:130] > # hooks_dir = [
	I0923 11:26:51.778883   43161 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0923 11:26:51.778888   43161 command_runner.go:130] > # ]
	I0923 11:26:51.778893   43161 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0923 11:26:51.778901   43161 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0923 11:26:51.778908   43161 command_runner.go:130] > # its default mounts from the following two files:
	I0923 11:26:51.778911   43161 command_runner.go:130] > #
	I0923 11:26:51.778917   43161 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0923 11:26:51.778925   43161 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0923 11:26:51.778933   43161 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0923 11:26:51.778936   43161 command_runner.go:130] > #
	I0923 11:26:51.778941   43161 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0923 11:26:51.778949   43161 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0923 11:26:51.778965   43161 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0923 11:26:51.778972   43161 command_runner.go:130] > #      only add mounts it finds in this file.
	I0923 11:26:51.778978   43161 command_runner.go:130] > #
	I0923 11:26:51.778982   43161 command_runner.go:130] > # default_mounts_file = ""
	I0923 11:26:51.778989   43161 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0923 11:26:51.778995   43161 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0923 11:26:51.779001   43161 command_runner.go:130] > pids_limit = 1024
	I0923 11:26:51.779007   43161 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0923 11:26:51.779015   43161 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0923 11:26:51.779021   43161 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0923 11:26:51.779031   43161 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0923 11:26:51.779037   43161 command_runner.go:130] > # log_size_max = -1
	I0923 11:26:51.779043   43161 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0923 11:26:51.779049   43161 command_runner.go:130] > # log_to_journald = false
	I0923 11:26:51.779055   43161 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0923 11:26:51.779062   43161 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0923 11:26:51.779067   43161 command_runner.go:130] > # Path to directory for container attach sockets.
	I0923 11:26:51.779074   43161 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0923 11:26:51.779079   43161 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0923 11:26:51.779085   43161 command_runner.go:130] > # bind_mount_prefix = ""
	I0923 11:26:51.779090   43161 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0923 11:26:51.779096   43161 command_runner.go:130] > # read_only = false
	I0923 11:26:51.779102   43161 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0923 11:26:51.779110   43161 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0923 11:26:51.779116   43161 command_runner.go:130] > # live configuration reload.
	I0923 11:26:51.779120   43161 command_runner.go:130] > # log_level = "info"
	I0923 11:26:51.779127   43161 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0923 11:26:51.779132   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.779138   43161 command_runner.go:130] > # log_filter = ""
	I0923 11:26:51.779143   43161 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0923 11:26:51.779152   43161 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0923 11:26:51.779158   43161 command_runner.go:130] > # separated by comma.
	I0923 11:26:51.779165   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779171   43161 command_runner.go:130] > # uid_mappings = ""
	I0923 11:26:51.779177   43161 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0923 11:26:51.779185   43161 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0923 11:26:51.779189   43161 command_runner.go:130] > # separated by comma.
	I0923 11:26:51.779199   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779206   43161 command_runner.go:130] > # gid_mappings = ""
	I0923 11:26:51.779215   43161 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0923 11:26:51.779223   43161 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 11:26:51.779230   43161 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 11:26:51.779239   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779245   43161 command_runner.go:130] > # minimum_mappable_uid = -1
	I0923 11:26:51.779251   43161 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0923 11:26:51.779259   43161 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0923 11:26:51.779267   43161 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0923 11:26:51.779276   43161 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0923 11:26:51.779283   43161 command_runner.go:130] > # minimum_mappable_gid = -1
	I0923 11:26:51.779291   43161 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0923 11:26:51.779298   43161 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0923 11:26:51.779305   43161 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0923 11:26:51.779310   43161 command_runner.go:130] > # ctr_stop_timeout = 30
	I0923 11:26:51.779316   43161 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0923 11:26:51.779324   43161 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0923 11:26:51.779329   43161 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0923 11:26:51.779335   43161 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0923 11:26:51.779339   43161 command_runner.go:130] > drop_infra_ctr = false
	I0923 11:26:51.779347   43161 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0923 11:26:51.779352   43161 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0923 11:26:51.779361   43161 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0923 11:26:51.779368   43161 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0923 11:26:51.779374   43161 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0923 11:26:51.779382   43161 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0923 11:26:51.779390   43161 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0923 11:26:51.779395   43161 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0923 11:26:51.779401   43161 command_runner.go:130] > # shared_cpuset = ""
	I0923 11:26:51.779407   43161 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0923 11:26:51.779414   43161 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0923 11:26:51.779418   43161 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0923 11:26:51.779425   43161 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0923 11:26:51.779430   43161 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0923 11:26:51.779435   43161 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0923 11:26:51.779443   43161 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0923 11:26:51.779448   43161 command_runner.go:130] > # enable_criu_support = false
	I0923 11:26:51.779457   43161 command_runner.go:130] > # Enable/disable the generation of the container,
	I0923 11:26:51.779464   43161 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0923 11:26:51.779471   43161 command_runner.go:130] > # enable_pod_events = false
	I0923 11:26:51.779477   43161 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0923 11:26:51.779492   43161 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0923 11:26:51.779496   43161 command_runner.go:130] > # default_runtime = "runc"
	I0923 11:26:51.779503   43161 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0923 11:26:51.779510   43161 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0923 11:26:51.779520   43161 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0923 11:26:51.779527   43161 command_runner.go:130] > # creation as a file is not desired either.
	I0923 11:26:51.779536   43161 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0923 11:26:51.779551   43161 command_runner.go:130] > # the hostname is being managed dynamically.
	I0923 11:26:51.779557   43161 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0923 11:26:51.779560   43161 command_runner.go:130] > # ]
	I0923 11:26:51.779567   43161 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0923 11:26:51.779575   43161 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0923 11:26:51.779583   43161 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0923 11:26:51.779591   43161 command_runner.go:130] > # Each entry in the table should follow the format:
	I0923 11:26:51.779594   43161 command_runner.go:130] > #
	I0923 11:26:51.779598   43161 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0923 11:26:51.779605   43161 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0923 11:26:51.779623   43161 command_runner.go:130] > # runtime_type = "oci"
	I0923 11:26:51.779629   43161 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0923 11:26:51.779635   43161 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0923 11:26:51.779641   43161 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0923 11:26:51.779646   43161 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0923 11:26:51.779652   43161 command_runner.go:130] > # monitor_env = []
	I0923 11:26:51.779657   43161 command_runner.go:130] > # privileged_without_host_devices = false
	I0923 11:26:51.779663   43161 command_runner.go:130] > # allowed_annotations = []
	I0923 11:26:51.779668   43161 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0923 11:26:51.779675   43161 command_runner.go:130] > # Where:
	I0923 11:26:51.779680   43161 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0923 11:26:51.779688   43161 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0923 11:26:51.779694   43161 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0923 11:26:51.779702   43161 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0923 11:26:51.779707   43161 command_runner.go:130] > #   in $PATH.
	I0923 11:26:51.779713   43161 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0923 11:26:51.779719   43161 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0923 11:26:51.779726   43161 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0923 11:26:51.779735   43161 command_runner.go:130] > #   state.
	I0923 11:26:51.779745   43161 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0923 11:26:51.779757   43161 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0923 11:26:51.779770   43161 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0923 11:26:51.779779   43161 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0923 11:26:51.779790   43161 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0923 11:26:51.779803   43161 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0923 11:26:51.779813   43161 command_runner.go:130] > #   The currently recognized values are:
	I0923 11:26:51.779823   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0923 11:26:51.779835   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0923 11:26:51.779847   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0923 11:26:51.779859   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0923 11:26:51.779872   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0923 11:26:51.779881   43161 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0923 11:26:51.779890   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0923 11:26:51.779898   43161 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0923 11:26:51.779904   43161 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0923 11:26:51.779913   43161 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0923 11:26:51.779920   43161 command_runner.go:130] > #   deprecated option "conmon".
	I0923 11:26:51.779926   43161 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0923 11:26:51.779933   43161 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0923 11:26:51.779940   43161 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0923 11:26:51.779947   43161 command_runner.go:130] > #   should be moved to the container's cgroup
	I0923 11:26:51.779953   43161 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0923 11:26:51.779960   43161 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0923 11:26:51.779966   43161 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0923 11:26:51.779973   43161 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0923 11:26:51.779977   43161 command_runner.go:130] > #
	I0923 11:26:51.779983   43161 command_runner.go:130] > # Using the seccomp notifier feature:
	I0923 11:26:51.779987   43161 command_runner.go:130] > #
	I0923 11:26:51.779994   43161 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0923 11:26:51.780002   43161 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0923 11:26:51.780009   43161 command_runner.go:130] > #
	I0923 11:26:51.780015   43161 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0923 11:26:51.780023   43161 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0923 11:26:51.780026   43161 command_runner.go:130] > #
	I0923 11:26:51.780034   43161 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0923 11:26:51.780040   43161 command_runner.go:130] > # feature.
	I0923 11:26:51.780043   43161 command_runner.go:130] > #
	I0923 11:26:51.780049   43161 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0923 11:26:51.780057   43161 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0923 11:26:51.780063   43161 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0923 11:26:51.780071   43161 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0923 11:26:51.780078   43161 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0923 11:26:51.780082   43161 command_runner.go:130] > #
	I0923 11:26:51.780090   43161 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0923 11:26:51.780097   43161 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0923 11:26:51.780102   43161 command_runner.go:130] > #
	I0923 11:26:51.780108   43161 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0923 11:26:51.780113   43161 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0923 11:26:51.780119   43161 command_runner.go:130] > #
	I0923 11:26:51.780125   43161 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0923 11:26:51.780133   43161 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0923 11:26:51.780138   43161 command_runner.go:130] > # limitation.
	I0923 11:26:51.780143   43161 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0923 11:26:51.780149   43161 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0923 11:26:51.780156   43161 command_runner.go:130] > runtime_type = "oci"
	I0923 11:26:51.780162   43161 command_runner.go:130] > runtime_root = "/run/runc"
	I0923 11:26:51.780167   43161 command_runner.go:130] > runtime_config_path = ""
	I0923 11:26:51.780173   43161 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0923 11:26:51.780177   43161 command_runner.go:130] > monitor_cgroup = "pod"
	I0923 11:26:51.780181   43161 command_runner.go:130] > monitor_exec_cgroup = ""
	I0923 11:26:51.780187   43161 command_runner.go:130] > monitor_env = [
	I0923 11:26:51.780193   43161 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0923 11:26:51.780198   43161 command_runner.go:130] > ]
	I0923 11:26:51.780204   43161 command_runner.go:130] > privileged_without_host_devices = false
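	For illustration only, this is roughly what an additional runtime-handler entry following the format documented above could look like, with the seccomp notifier enabled through allowed_annotations; the handler name "crun" and its paths are hypothetical and are not part of this run's configuration:
	
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"                # hypothetical binary path; must exist on the host
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",  # lets pods opt into the notifier feature described above
	]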
	I0923 11:26:51.780212   43161 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0923 11:26:51.780219   43161 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0923 11:26:51.780225   43161 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0923 11:26:51.780234   43161 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0923 11:26:51.780243   43161 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0923 11:26:51.780251   43161 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0923 11:26:51.780260   43161 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0923 11:26:51.780269   43161 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0923 11:26:51.780277   43161 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0923 11:26:51.780284   43161 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0923 11:26:51.780290   43161 command_runner.go:130] > # Example:
	I0923 11:26:51.780294   43161 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0923 11:26:51.780301   43161 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0923 11:26:51.780306   43161 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0923 11:26:51.780313   43161 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0923 11:26:51.780316   43161 command_runner.go:130] > # cpuset = 0
	I0923 11:26:51.780320   43161 command_runner.go:130] > # cpushares = "0-1"
	I0923 11:26:51.780325   43161 command_runner.go:130] > # Where:
	I0923 11:26:51.780330   43161 command_runner.go:130] > # The workload name is workload-type.
	I0923 11:26:51.780338   43161 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0923 11:26:51.780345   43161 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0923 11:26:51.780350   43161 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0923 11:26:51.780360   43161 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0923 11:26:51.780367   43161 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0923 11:26:51.780372   43161 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0923 11:26:51.780381   43161 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0923 11:26:51.780387   43161 command_runner.go:130] > # Default value is set to true
	I0923 11:26:51.780391   43161 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0923 11:26:51.780397   43161 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0923 11:26:51.780404   43161 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0923 11:26:51.780408   43161 command_runner.go:130] > # Default value is set to 'false'
	I0923 11:26:51.780414   43161 command_runner.go:130] > # disable_hostport_mapping = false
	I0923 11:26:51.780421   43161 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0923 11:26:51.780425   43161 command_runner.go:130] > #
	I0923 11:26:51.780430   43161 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0923 11:26:51.780436   43161 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0923 11:26:51.780441   43161 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0923 11:26:51.780447   43161 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0923 11:26:51.780457   43161 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0923 11:26:51.780463   43161 command_runner.go:130] > [crio.image]
	I0923 11:26:51.780472   43161 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0923 11:26:51.780479   43161 command_runner.go:130] > # default_transport = "docker://"
	I0923 11:26:51.780488   43161 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0923 11:26:51.780498   43161 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0923 11:26:51.780504   43161 command_runner.go:130] > # global_auth_file = ""
	I0923 11:26:51.780511   43161 command_runner.go:130] > # The image used to instantiate infra containers.
	I0923 11:26:51.780518   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.780525   43161 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0923 11:26:51.780535   43161 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0923 11:26:51.780544   43161 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0923 11:26:51.780553   43161 command_runner.go:130] > # This option supports live configuration reload.
	I0923 11:26:51.780559   43161 command_runner.go:130] > # pause_image_auth_file = ""
	I0923 11:26:51.780565   43161 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0923 11:26:51.780570   43161 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0923 11:26:51.780576   43161 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0923 11:26:51.780581   43161 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0923 11:26:51.780586   43161 command_runner.go:130] > # pause_command = "/pause"
	I0923 11:26:51.780595   43161 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0923 11:26:51.780604   43161 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0923 11:26:51.780616   43161 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0923 11:26:51.780631   43161 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0923 11:26:51.780643   43161 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0923 11:26:51.780655   43161 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0923 11:26:51.780664   43161 command_runner.go:130] > # pinned_images = [
	I0923 11:26:51.780672   43161 command_runner.go:130] > # ]
	I0923 11:26:51.780682   43161 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0923 11:26:51.780695   43161 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0923 11:26:51.780704   43161 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0923 11:26:51.780714   43161 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0923 11:26:51.780725   43161 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0923 11:26:51.780734   43161 command_runner.go:130] > # signature_policy = ""
	I0923 11:26:51.780743   43161 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0923 11:26:51.780755   43161 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0923 11:26:51.780766   43161 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0923 11:26:51.780776   43161 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0923 11:26:51.780786   43161 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0923 11:26:51.780794   43161 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0923 11:26:51.780804   43161 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0923 11:26:51.780814   43161 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0923 11:26:51.780824   43161 command_runner.go:130] > # changing them here.
	I0923 11:26:51.780830   43161 command_runner.go:130] > # insecure_registries = [
	I0923 11:26:51.780838   43161 command_runner.go:130] > # ]
	I0923 11:26:51.780849   43161 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0923 11:26:51.780861   43161 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0923 11:26:51.780870   43161 command_runner.go:130] > # image_volumes = "mkdir"
	I0923 11:26:51.780879   43161 command_runner.go:130] > # Temporary directory to use for storing big files
	I0923 11:26:51.780889   43161 command_runner.go:130] > # big_files_temporary_dir = ""
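	As a sketch of the pinned_images and insecure_registries options documented above (the glob/keyword patterns and the registry host below are illustrative assumptions, not values from this cluster):
	
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact match: the pause image configured above
		"registry.k8s.io/kube-*",       # glob match: wildcard only at the end
		"*coredns*",                    # keyword match: wildcards on both ends
	]
	insecure_registries = [
		"myregistry.local:5000",        # hypothetical registry; TLS verification would be skipped
	]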
	I0923 11:26:51.780901   43161 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0923 11:26:51.780909   43161 command_runner.go:130] > # CNI plugins.
	I0923 11:26:51.780917   43161 command_runner.go:130] > [crio.network]
	I0923 11:26:51.780926   43161 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0923 11:26:51.780937   43161 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0923 11:26:51.780947   43161 command_runner.go:130] > # cni_default_network = ""
	I0923 11:26:51.780954   43161 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0923 11:26:51.780964   43161 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0923 11:26:51.780971   43161 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0923 11:26:51.780980   43161 command_runner.go:130] > # plugin_dirs = [
	I0923 11:26:51.780986   43161 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0923 11:26:51.780993   43161 command_runner.go:130] > # ]
	I0923 11:26:51.781001   43161 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0923 11:26:51.781009   43161 command_runner.go:130] > [crio.metrics]
	I0923 11:26:51.781017   43161 command_runner.go:130] > # Globally enable or disable metrics support.
	I0923 11:26:51.781025   43161 command_runner.go:130] > enable_metrics = true
	I0923 11:26:51.781032   43161 command_runner.go:130] > # Specify enabled metrics collectors.
	I0923 11:26:51.781042   43161 command_runner.go:130] > # Per default all metrics are enabled.
	I0923 11:26:51.781051   43161 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0923 11:26:51.781064   43161 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0923 11:26:51.781075   43161 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0923 11:26:51.781084   43161 command_runner.go:130] > # metrics_collectors = [
	I0923 11:26:51.781090   43161 command_runner.go:130] > # 	"operations",
	I0923 11:26:51.781100   43161 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0923 11:26:51.781119   43161 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0923 11:26:51.781128   43161 command_runner.go:130] > # 	"operations_errors",
	I0923 11:26:51.781139   43161 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0923 11:26:51.781147   43161 command_runner.go:130] > # 	"image_pulls_by_name",
	I0923 11:26:51.781154   43161 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0923 11:26:51.781160   43161 command_runner.go:130] > # 	"image_pulls_failures",
	I0923 11:26:51.781166   43161 command_runner.go:130] > # 	"image_pulls_successes",
	I0923 11:26:51.781171   43161 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0923 11:26:51.781177   43161 command_runner.go:130] > # 	"image_layer_reuse",
	I0923 11:26:51.781181   43161 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0923 11:26:51.781187   43161 command_runner.go:130] > # 	"containers_oom_total",
	I0923 11:26:51.781192   43161 command_runner.go:130] > # 	"containers_oom",
	I0923 11:26:51.781198   43161 command_runner.go:130] > # 	"processes_defunct",
	I0923 11:26:51.781202   43161 command_runner.go:130] > # 	"operations_total",
	I0923 11:26:51.781209   43161 command_runner.go:130] > # 	"operations_latency_seconds",
	I0923 11:26:51.781214   43161 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0923 11:26:51.781220   43161 command_runner.go:130] > # 	"operations_errors_total",
	I0923 11:26:51.781224   43161 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0923 11:26:51.781231   43161 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0923 11:26:51.781235   43161 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0923 11:26:51.781241   43161 command_runner.go:130] > # 	"image_pulls_success_total",
	I0923 11:26:51.781246   43161 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0923 11:26:51.781251   43161 command_runner.go:130] > # 	"containers_oom_count_total",
	I0923 11:26:51.781256   43161 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0923 11:26:51.781263   43161 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0923 11:26:51.781266   43161 command_runner.go:130] > # ]
	I0923 11:26:51.781273   43161 command_runner.go:130] > # The port on which the metrics server will listen.
	I0923 11:26:51.781277   43161 command_runner.go:130] > # metrics_port = 9090
	I0923 11:26:51.781284   43161 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0923 11:26:51.781288   43161 command_runner.go:130] > # metrics_socket = ""
	I0923 11:26:51.781295   43161 command_runner.go:130] > # The certificate for the secure metrics server.
	I0923 11:26:51.781301   43161 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0923 11:26:51.781309   43161 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0923 11:26:51.781314   43161 command_runner.go:130] > # certificate on any modification event.
	I0923 11:26:51.781320   43161 command_runner.go:130] > # metrics_cert = ""
	I0923 11:26:51.781324   43161 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0923 11:26:51.781332   43161 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0923 11:26:51.781336   43161 command_runner.go:130] > # metrics_key = ""
	I0923 11:26:51.781343   43161 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0923 11:26:51.781349   43161 command_runner.go:130] > [crio.tracing]
	I0923 11:26:51.781355   43161 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0923 11:26:51.781361   43161 command_runner.go:130] > # enable_tracing = false
	I0923 11:26:51.781366   43161 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0923 11:26:51.781373   43161 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0923 11:26:51.781397   43161 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0923 11:26:51.781407   43161 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
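	If trace exporting were wanted, the commented [crio.tracing] defaults above could be activated roughly as follows; the endpoint and the always-sample rate are simply the documented example values, not settings used by this run:
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"             # gRPC collector address, per the comment above
	tracing_sampling_rate_per_million = 1000000   # 1000000 = always sample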
	I0923 11:26:51.781411   43161 command_runner.go:130] > # CRI-O NRI configuration.
	I0923 11:26:51.781416   43161 command_runner.go:130] > [crio.nri]
	I0923 11:26:51.781420   43161 command_runner.go:130] > # Globally enable or disable NRI.
	I0923 11:26:51.781426   43161 command_runner.go:130] > # enable_nri = false
	I0923 11:26:51.781430   43161 command_runner.go:130] > # NRI socket to listen on.
	I0923 11:26:51.781437   43161 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0923 11:26:51.781441   43161 command_runner.go:130] > # NRI plugin directory to use.
	I0923 11:26:51.781448   43161 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0923 11:26:51.781456   43161 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0923 11:26:51.781463   43161 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0923 11:26:51.781468   43161 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0923 11:26:51.781474   43161 command_runner.go:130] > # nri_disable_connections = false
	I0923 11:26:51.781480   43161 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0923 11:26:51.781486   43161 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0923 11:26:51.781491   43161 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0923 11:26:51.781498   43161 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0923 11:26:51.781504   43161 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0923 11:26:51.781510   43161 command_runner.go:130] > [crio.stats]
	I0923 11:26:51.781515   43161 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0923 11:26:51.781522   43161 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0923 11:26:51.781530   43161 command_runner.go:130] > # stats_collection_period = 0
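	Collecting only the values that are explicitly set (not commented out) in the dump above, the effective overrides reduce to roughly the following sketch; placing them in /etc/crio/crio.conf or a drop-in under /etc/crio/crio.conf.d/ is an assumption here, as the log does not show which file they came from:
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
	seccomp_use_default_when_empty = false
	pids_limit = 1024
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"
	# plus the [crio.runtime.runtimes.runc] table shown earlier
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.metrics]
	enable_metrics = true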
	I0923 11:26:51.781601   43161 cni.go:84] Creating CNI manager for ""
	I0923 11:26:51.781614   43161 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 11:26:51.781622   43161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:26:51.781641   43161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-399279 NodeName:multinode-399279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:26:51.781797   43161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-399279"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:26:51.781863   43161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:26:51.792175   43161 command_runner.go:130] > kubeadm
	I0923 11:26:51.792194   43161 command_runner.go:130] > kubectl
	I0923 11:26:51.792198   43161 command_runner.go:130] > kubelet
	I0923 11:26:51.792218   43161 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:26:51.792271   43161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:26:51.801665   43161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0923 11:26:51.818351   43161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:26:51.834930   43161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 11:26:51.851417   43161 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0923 11:26:51.855252   43161 command_runner.go:130] > 192.168.39.71	control-plane.minikube.internal
	I0923 11:26:51.855332   43161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:26:51.994265   43161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:26:52.009810   43161 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279 for IP: 192.168.39.71
	I0923 11:26:52.009842   43161 certs.go:194] generating shared ca certs ...
	I0923 11:26:52.009864   43161 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:26:52.010040   43161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:26:52.010078   43161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:26:52.010088   43161 certs.go:256] generating profile certs ...
	I0923 11:26:52.010162   43161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/client.key
	I0923 11:26:52.010219   43161 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key.43f0afc4
	I0923 11:26:52.010256   43161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key
	I0923 11:26:52.010267   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:26:52.010282   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:26:52.010296   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:26:52.010308   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:26:52.010320   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:26:52.010332   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:26:52.010345   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:26:52.010357   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:26:52.010409   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:26:52.010437   43161 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:26:52.010446   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:26:52.010468   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:26:52.010489   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:26:52.010510   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:26:52.010547   43161 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:26:52.010596   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem -> /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.010611   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.010623   43161 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.011170   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:26:52.036640   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:26:52.063594   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:26:52.090000   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:26:52.114366   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 11:26:52.138524   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:26:52.163102   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:26:52.188818   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/multinode-399279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:26:52.212806   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:26:52.236411   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:26:52.260015   43161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:26:52.284616   43161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:26:52.301590   43161 ssh_runner.go:195] Run: openssl version
	I0923 11:26:52.307571   43161 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 11:26:52.307637   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:26:52.318493   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.322921   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.323111   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.323152   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:26:52.328668   43161 command_runner.go:130] > 51391683
	I0923 11:26:52.328866   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 11:26:52.338526   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:26:52.349343   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353729   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353784   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.353833   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:26:52.359428   43161 command_runner.go:130] > 3ec20f2e
	I0923 11:26:52.359496   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:26:52.369031   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:26:52.380531   43161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.384831   43161 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.385154   43161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.385196   43161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:26:52.390838   43161 command_runner.go:130] > b5213941
	I0923 11:26:52.391111   43161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
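For context on the step above: each `openssl x509 -hash -noout` call in this log is paired with a `ln -fs <cert> /etc/ssl/certs/<hash>.0` symlink, which is the standard OpenSSL subject-hash (c_rehash) convention for making a CA trusted system-wide. A minimal sketch of that same idea, using a hypothetical certificate path rather than one from this run:

	# Sketch only: compute the subject hash of a CA cert and expose it under
	# /etc/ssl/certs so OpenSSL-based clients can locate it by hash lookup.
	# "my-ca.pem" is a placeholder name, not a file from this test run.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"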
	I0923 11:26:52.400596   43161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:26:52.405193   43161 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:26:52.405214   43161 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 11:26:52.405222   43161 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0923 11:26:52.405231   43161 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:26:52.405240   43161 command_runner.go:130] > Access: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405247   43161 command_runner.go:130] > Modify: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405255   43161 command_runner.go:130] > Change: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405267   43161 command_runner.go:130] >  Birth: 2024-09-23 11:20:02.448276730 +0000
	I0923 11:26:52.405316   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:26:52.410731   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.410972   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:26:52.416442   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.416500   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:26:52.422025   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.422086   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:26:52.427562   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.427615   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:26:52.432853   43161 command_runner.go:130] > Certificate will not expire
	I0923 11:26:52.433086   43161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 11:26:52.438374   43161 command_runner.go:130] > Certificate will not expire
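The `-checkend 86400` checks above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the command prints "Certificate will not expire" and exits 0 when the cert is good for at least that long, and exits non-zero otherwise. A minimal sketch of the same check against a placeholder path (not one from this run):

	# Sketch: exit status 0 means the cert stays valid for at least the next 24h.
	# /path/to/cert.crt is a placeholder, not a path used by this test.
	if openssl x509 -noout -in /path/to/cert.crt -checkend 86400; then
	    echo "certificate valid for at least 24 more hours"
	else
	    echo "certificate expires within 24 hours (or is already expired)"
	fi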
	I0923 11:26:52.438514   43161 kubeadm.go:392] StartCluster: {Name:multinode-399279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-399279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.138 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:26:52.438609   43161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:26:52.438665   43161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:26:52.473735   43161 command_runner.go:130] > 46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05
	I0923 11:26:52.473764   43161 command_runner.go:130] > ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d
	I0923 11:26:52.473773   43161 command_runner.go:130] > 87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598
	I0923 11:26:52.473789   43161 command_runner.go:130] > e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53
	I0923 11:26:52.473799   43161 command_runner.go:130] > d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698
	I0923 11:26:52.473825   43161 command_runner.go:130] > 03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880
	I0923 11:26:52.473847   43161 command_runner.go:130] > a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51
	I0923 11:26:52.473978   43161 command_runner.go:130] > 1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0
	I0923 11:26:52.475327   43161 cri.go:89] found id: "46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05"
	I0923 11:26:52.475343   43161 cri.go:89] found id: "ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d"
	I0923 11:26:52.475348   43161 cri.go:89] found id: "87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598"
	I0923 11:26:52.475356   43161 cri.go:89] found id: "e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53"
	I0923 11:26:52.475360   43161 cri.go:89] found id: "d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698"
	I0923 11:26:52.475367   43161 cri.go:89] found id: "03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880"
	I0923 11:26:52.475371   43161 cri.go:89] found id: "a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51"
	I0923 11:26:52.475378   43161 cri.go:89] found id: "1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0"
	I0923 11:26:52.475386   43161 cri.go:89] found id: ""
	I0923 11:26:52.475438   43161 ssh_runner.go:195] Run: sudo runc list -f json
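The container IDs listed just above come from filtering CRI-O's full container list by the pod-namespace label, exactly as the log's `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation does. A hedged sketch of that query, followed by inspecting one of the returned IDs (placeholder ID shown, not one from this run):

	# Sketch: list every kube-system container (running or exited), IDs only,
	# then inspect a single container. <container-id> is a placeholder.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect <container-id>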
	
	
	==> CRI-O <==
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.587253456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6f16853-f324-43da-b38d-bfdbbec0a0fe name=/runtime.v1.RuntimeService/Version
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.588513393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7d062a3-5127-4b09-9702-548d19fea19b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.589339185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091068589309587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7d062a3-5127-4b09-9702-548d19fea19b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.591636122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bbfdf8d-021f-4ff2-859e-7418b0f30069 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.591734468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bbfdf8d-021f-4ff2-859e-7418b0f30069 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.592758951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bbfdf8d-021f-4ff2-859e-7418b0f30069 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.643615747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=006d28c7-fcc3-44d5-9e77-289c6055cad8 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.643746364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=006d28c7-fcc3-44d5-9e77-289c6055cad8 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.644934015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1de31d7-fde2-4c8c-8e50-0bced079cc6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.645524267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091068645491586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1de31d7-fde2-4c8c-8e50-0bced079cc6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.646318351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55adc16b-1680-4f32-9ca6-2200e7a8d441 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.646402250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55adc16b-1680-4f32-9ca6-2200e7a8d441 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.646916656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55adc16b-1680-4f32-9ca6-2200e7a8d441 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.695798675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b7069a7-c8b0-41ed-8194-7831bac21af9 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.695876789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b7069a7-c8b0-41ed-8194-7831bac21af9 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.696946032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c8e11d3-616f-4d81-98dc-9aaaa77f3251 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.697395482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091068697374302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c8e11d3-616f-4d81-98dc-9aaaa77f3251 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.698024226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05c65af7-d21d-4103-961d-d66199d4fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.698083574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05c65af7-d21d-4103-961d-d66199d4fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.698401868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05c65af7-d21d-4103-961d-d66199d4fe6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.720181973Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f444eb22-d106-40fa-81ec-77f4368d0e41 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.721076565Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7b2xk,Uid:12825eb2-166d-444f-ab26-b7a6f5e1f7c2,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090852151436196,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:26:57.979347041Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-czp4x,Uid:a933bede-5c72-410e-b65c-4f23724b46a0,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1727090818437748080,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:26:57.979348773Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5b19a17b-ee09-4591-b291-33694a7ea0ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090818357146833,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T11:26:57.979356846Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&PodSandboxMetadata{Name:kindnet-qcbts,Uid:09e2cbc2-8fda-4c89-905e-7e4714aabf4c,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1727090818351384826,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:26:57.979351565Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwq2c,Uid:2c4f69b2-34b6-439c-870e-093ad73e616e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090818333188354,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:26:57.979354132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&PodSandboxMetadata{Name:etcd-multinode-399279,Uid:b6458e62df86155bc018f93939090111,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090814505755493,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.71:2379,kubernetes.io/config.hash: b6458e62df86155bc018f93939090111,kubernetes.io/config.seen: 2024-09-23T11:26:53.971892840Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-399279,Uid:94e95927154f4566cd0c24db5c0e8bed,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090814492309679,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94e95927154f4566cd0c24db5c0e8bed,kubernetes.io/config.seen: 2024-09-23T11:26:53.971898982Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-399279,Uid:b234119e32c3aeee06e4a906af119882,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090814491062200,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-a
piserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.71:8443,kubernetes.io/config.hash: b234119e32c3aeee06e4a906af119882,kubernetes.io/config.seen: 2024-09-23T11:26:53.971896756Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-399279,Uid:9a04be2ca8d2577c7ca0098a0b025fb7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727090814487295183,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,tier: control-plane,},Annotations:map[string]string{kubernete
s.io/config.hash: 9a04be2ca8d2577c7ca0098a0b025fb7,kubernetes.io/config.seen: 2024-09-23T11:26:53.971898006Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7b2xk,Uid:12825eb2-166d-444f-ab26-b7a6f5e1f7c2,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090483637664166,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:21:23.327222382Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5b19a17b-ee09-4591-b291-33694a7ea0ad,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1727090429141731151,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-23T11:20:28.827717860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-czp4x,Uid:a933bede-5c72-410e-b65c-4f23724b46a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090429141263352,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:20:28.834954015Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&PodSandboxMetadata{Name:kindnet-qcbts,Uid:09e2cbc2-8fda-4c89-905e-7e4714aabf4c,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090416606367381,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:20:16.300328177Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&PodSandboxMetadata{Name:kube-proxy-fwq2c,Uid:2c4f69b2-34b6-439c-870e-093ad73e616e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090416602106331,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,k8s-app: kube-
proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-23T11:20:16.288757101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-399279,Uid:94e95927154f4566cd0c24db5c0e8bed,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090405783490398,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94e95927154f4566cd0c24db5c0e8bed,kubernetes.io/config.seen: 2024-09-23T11:20:05.306094380Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&PodSandboxMetadata{Name:ku
be-apiserver-multinode-399279,Uid:b234119e32c3aeee06e4a906af119882,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090405773484695,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.71:8443,kubernetes.io/config.hash: b234119e32c3aeee06e4a906af119882,kubernetes.io/config.seen: 2024-09-23T11:20:05.306092304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-399279,Uid:9a04be2ca8d2577c7ca0098a0b025fb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090405767035520,Labels:map[string]string{component: kube-cont
roller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a04be2ca8d2577c7ca0098a0b025fb7,kubernetes.io/config.seen: 2024-09-23T11:20:05.306093538Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-399279,Uid:b6458e62df86155bc018f93939090111,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727090405762055954,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://1
92.168.39.71:2379,kubernetes.io/config.hash: b6458e62df86155bc018f93939090111,kubernetes.io/config.seen: 2024-09-23T11:20:05.306087464Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f444eb22-d106-40fa-81ec-77f4368d0e41 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.722172145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ae7bf1a-00d7-4b38-97b1-8ebbfd17dd35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.722226359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ae7bf1a-00d7-4b38-97b1-8ebbfd17dd35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:31:08 multinode-399279 crio[2719]: time="2024-09-23 11:31:08.723184232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a929bac2c9af35373b3a391ab80b12ef0d068e8c124c282385bbcfc3bd77afb,PodSandboxId:9d6f4c17090e22161a48b85fc7e4bf6c0be5448c31769e7b6b390d57907f555d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727090852308854422,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2,PodSandboxId:323d824dc0d8c1cb31a1902d12ce22dbfef34d2bdf6597901f20db43082507bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727090818762462565,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c,PodSandboxId:effdb178fd9f7ff759b4cef7c002fdb837eb4c3881bab323f2c1f731ad1be106,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727090818702671178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a19b4acbb744498adbb752bad81cf1628c0379904fb98dd9790531c6ad5773,PodSandboxId:975e8f1a983c4723def502debbf26acff02e3f277d2ba147e771adba6890d7ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727090818647855560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f,PodSandboxId:230baf8529f984dadeee6bd5f7607ea0b8b606778b11a492bcf5441dc4727c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727090818593836937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e-093ad73e616e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f,PodSandboxId:8ea0d7e1e90acf65ca9217ec2b986cc41ca01633911f0827ba7d0f1ebafeaa39,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727090814758235821,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2,PodSandboxId:2b065e42ec7b8cf99b147e5dca951e1ba656e5d404c54d6af4b1a72883d663ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727090814745556229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d
79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be,PodSandboxId:1f61542c4ba1f86dc297bd511560cc13f62aadc04886493a2dd921aa0a88194d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727090814696482867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381,PodSandboxId:536b4b526836287c80dc7429b46f16353f7bdf79e7faabe51e367ce6de957682,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727090814692118037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff8654a48e6ba12401df225da883e18d28906348b268bf358931d56e91dc3b3,PodSandboxId:5475877e3bc02a2446c93d2b146f56d35323e60d5e39f7ae4f0ee9a3817a6711,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727090486849847026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7b2xk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12825eb2-166d-444f-ab26-b7a6f5e1f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e9fb7bc93a91fa2d4a81eb7c542abeaa9e8c81742ac05195c5163ba7ca1d05,PodSandboxId:353752d7e98830340b110169d83039074902542283ce228fc788195afe83549c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727090429314951830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b19a17b-ee09-4591-b291-33694a7ea0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d,PodSandboxId:8c07860c73cd568e80eeba32237e2ccd2635cf6f37e3f53bed75a0a4db25ace8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727090429314652753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-czp4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a933bede-5c72-410e-b65c-4f23724b46a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598,PodSandboxId:b70d53e90f5e897ffef03565a5852855ee23defef5bdee462f20dc44cecb39bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727090417171395008,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qcbts,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 09e2cbc2-8fda-4c89-905e-7e4714aabf4c,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53,PodSandboxId:1a14ce18b6c36f916406236d8ec05fe867682e90016991454365196b01f97159,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727090416998741435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fwq2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4f69b2-34b6-439c-870e
-093ad73e616e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880,PodSandboxId:0a11ca8d6fc13ad9595c998206b549364f1fc4e3af77a99723f432db6875f677,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727090406031853770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e95927154f4566cd0c24db5c0e8bed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698,PodSandboxId:f513a49252bbbfb17d1f5169046a117deffba9efca64e831d3cb641a47f4573f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727090406038261398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6458e62df86155bc018f93939090111,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51,PodSandboxId:8250e1c93d6db9ed4423f4d409b9aef876a02dcebf76bc0e5537f0f2f1ab96ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727090405954863898,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b234119e32c3aeee06e4a906af119882,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0,PodSandboxId:b548ec2f049be7b5aaf4b4fe2608a03f11d15c9d3c2fee05f74e874b8abf2778,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727090405939039362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-399279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a04be2ca8d2577c7ca0098a0b025fb7,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ae7bf1a-00d7-4b38-97b1-8ebbfd17dd35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a929bac2c9af       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9d6f4c17090e2       busybox-7dff88458-7b2xk
	8d372fd2cf2ff       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   323d824dc0d8c       kindnet-qcbts
	11c4c3aad6d39       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   effdb178fd9f7       coredns-7c65d6cfc9-czp4x
	76a19b4acbb74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   975e8f1a983c4       storage-provisioner
	1508d80d15a66       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   230baf8529f98       kube-proxy-fwq2c
	587c4f94f2349       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   8ea0d7e1e90ac       etcd-multinode-399279
	0920dd93b5fac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   2b065e42ec7b8       kube-controller-manager-multinode-399279
	6ae00d08a26e7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   1f61542c4ba1f       kube-scheduler-multinode-399279
	aac4bf9cbc3d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   536b4b5268362       kube-apiserver-multinode-399279
	8ff8654a48e6b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   5475877e3bc02       busybox-7dff88458-7b2xk
	46e9fb7bc93a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   353752d7e9883       storage-provisioner
	ae8539595eedb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   8c07860c73cd5       coredns-7c65d6cfc9-czp4x
	87e705b8bdacd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   b70d53e90f5e8       kindnet-qcbts
	e0815b2e94fc6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   1a14ce18b6c36       kube-proxy-fwq2c
	d83ab98dc7840       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   f513a49252bbb       etcd-multinode-399279
	03f8f7a5a8d6b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   0a11ca8d6fc13       kube-scheduler-multinode-399279
	a957e4461eccd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      11 minutes ago      Exited              kube-apiserver            0                   8250e1c93d6db       kube-apiserver-multinode-399279
	1dcdb01009263       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      11 minutes ago      Exited              kube-controller-manager   0                   b548ec2f049be       kube-controller-manager-multinode-399279
	
	
	==> coredns [11c4c3aad6d3984f51085fb013e90864fb20df79b9c7b9e4bf9dc581a841238c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48557 - 55257 "HINFO IN 2321312220502510881.4191824833847128527. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.080795278s
	
	
	==> coredns [ae8539595eedb1b816b0bf321287104b6e899693033042cdf3957cb2f832481d] <==
	[INFO] 10.244.0.3:37249 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192168s
	[INFO] 10.244.0.3:34093 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072201s
	[INFO] 10.244.0.3:46390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003892s
	[INFO] 10.244.0.3:49193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001238965s
	[INFO] 10.244.0.3:58221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045502s
	[INFO] 10.244.0.3:49543 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117993s
	[INFO] 10.244.0.3:34408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053576s
	[INFO] 10.244.1.2:46900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164548s
	[INFO] 10.244.1.2:32935 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119032s
	[INFO] 10.244.1.2:39915 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126255s
	[INFO] 10.244.1.2:54010 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122174s
	[INFO] 10.244.0.3:42206 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108434s
	[INFO] 10.244.0.3:58877 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100649s
	[INFO] 10.244.0.3:44498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068199s
	[INFO] 10.244.0.3:43306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071024s
	[INFO] 10.244.1.2:38445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217153s
	[INFO] 10.244.1.2:50825 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239401s
	[INFO] 10.244.1.2:54085 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000258958s
	[INFO] 10.244.1.2:58058 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000306327s
	[INFO] 10.244.0.3:36145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085742s
	[INFO] 10.244.0.3:49426 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077936s
	[INFO] 10.244.0.3:45842 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049338s
	[INFO] 10.244.0.3:40634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00003727s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-399279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-399279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=multinode-399279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_20_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:20:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-399279
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:31:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:26:57 +0000   Mon, 23 Sep 2024 11:20:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    multinode-399279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf91b75d1c864569a929cf8d7636034b
	  System UUID:                cf91b75d-1c86-4569-a929-cf8d7636034b
	  Boot ID:                    eed2b87b-8697-43e2-9a45-7bd2f53d2e87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7b2xk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 coredns-7c65d6cfc9-czp4x                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-399279                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-qcbts                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-399279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-399279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fwq2c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-399279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-399279 event: Registered Node multinode-399279 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-399279 status is now: NodeReady
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node multinode-399279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node multinode-399279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node multinode-399279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node multinode-399279 event: Registered Node multinode-399279 in Controller
	
	
	Name:               multinode-399279-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-399279-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=multinode-399279
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T11_27_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:27:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-399279-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:28:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:29:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:29:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:29:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 11:28:11 +0000   Mon, 23 Sep 2024 11:29:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-399279-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22d342f73c814c358b43cd34890b5f63
	  System UUID:                22d342f7-3c81-4c35-8b43-cd34890b5f63
	  Boot ID:                    2522cf5f-f7e8-470f-a435-11dbe540dbcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4xxfg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-84zhl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-pdcm9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m24s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-399279-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-399279-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-399279-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m48s                  kubelet          Node multinode-399279-m02 status is now: NodeReady
	  Normal  Starting                 3m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m29s (x2 over 3m29s)  kubelet          Node multinode-399279-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s (x2 over 3m29s)  kubelet          Node multinode-399279-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s (x2 over 3m29s)  kubelet          Node multinode-399279-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m10s                  kubelet          Node multinode-399279-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-399279-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061864] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.167738] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.146097] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.275460] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Sep23 11:20] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.427023] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.064317] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.485710] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.085617] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.206175] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.133414] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.878893] kauditd_printk_skb: 69 callbacks suppressed
	[Sep23 11:21] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 11:26] systemd-fstab-generator[2644]: Ignoring "noauto" option for root device
	[  +0.162670] systemd-fstab-generator[2656]: Ignoring "noauto" option for root device
	[  +0.175540] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[  +0.133471] systemd-fstab-generator[2682]: Ignoring "noauto" option for root device
	[  +0.290395] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +1.197489] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +1.862409] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +4.770005] kauditd_printk_skb: 184 callbacks suppressed
	[Sep23 11:27] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.108149] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.963559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [587c4f94f2349852dfe947dc2f695a754f7d6305f2bf962f77faad79d9cf939f] <==
	{"level":"info","ts":"2024-09-23T11:26:55.168575Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T11:26:55.168797Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"226d7ac4e2309206","initial-advertise-peer-urls":["https://192.168.39.71:2380"],"listen-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.71:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:26:55.168837Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:26:55.168890Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"226d7ac4e2309206","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-23T11:26:55.172111Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174001Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174043Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:26:55.174828Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:26:55.174874Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:26:55.808048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgPreVoteResp from 226d7ac4e2309206 at term 2"}
	{"level":"info","ts":"2024-09-23T11:26:55.808164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.808186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-23T11:26:55.818244Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"226d7ac4e2309206","local-member-attributes":"{Name:multinode-399279 ClientURLs:[https://192.168.39.71:2379]}","request-path":"/0/members/226d7ac4e2309206/attributes","cluster-id":"98fbf1e9ed6d9a6e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:26:55.818377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:26:55.819620Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:26:55.822766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.71:2379"}
	{"level":"info","ts":"2024-09-23T11:26:55.820010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:26:55.826024Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:26:55.841347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:26:55.844045Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:26:55.845656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [d83ab98dc784041ef4e46d07ec523173b19481c25ae0dcac3c012fe9ec754698] <==
	{"level":"info","ts":"2024-09-23T11:21:07.381853Z","caller":"traceutil/trace.go:171","msg":"trace[922763614] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:510; }","duration":"366.656121ms","start":"2024-09-23T11:21:07.015189Z","end":"2024-09-23T11:21:07.381845Z","steps":["trace[922763614] 'agreement among raft nodes before linearized reading'  (duration: 366.606671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:21:07.381749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.432943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-399279-m02\" ","response":"range_response_count:1 size:2893"}
	{"level":"info","ts":"2024-09-23T11:21:07.382346Z","caller":"traceutil/trace.go:171","msg":"trace[933645049] range","detail":"{range_begin:/registry/minions/multinode-399279-m02; range_end:; response_count:1; response_revision:510; }","duration":"125.041413ms","start":"2024-09-23T11:21:07.257296Z","end":"2024-09-23T11:21:07.382338Z","steps":["trace[933645049] 'agreement among raft nodes before linearized reading'  (duration: 124.408071ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:21:09.836849Z","caller":"traceutil/trace.go:171","msg":"trace[1974765449] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"165.50944ms","start":"2024-09-23T11:21:09.671325Z","end":"2024-09-23T11:21:09.836834Z","steps":["trace[1974765449] 'process raft request'  (duration: 165.239447ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:22:02.343234Z","caller":"traceutil/trace.go:171","msg":"trace[1251979350] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"229.692278ms","start":"2024-09-23T11:22:02.113509Z","end":"2024-09-23T11:22:02.343202Z","steps":["trace[1251979350] 'process raft request'  (duration: 140.707509ms)","trace[1251979350] 'compare'  (duration: 88.578407ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.226158Z","caller":"traceutil/trace.go:171","msg":"trace[1295102027] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"126.914634ms","start":"2024-09-23T11:22:04.099219Z","end":"2024-09-23T11:22:04.226134Z","steps":["trace[1295102027] 'read index received'  (duration: 126.675435ms)","trace[1295102027] 'applied index is now lower than readState.Index'  (duration: 238.69µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.226257Z","caller":"traceutil/trace.go:171","msg":"trace[830481837] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.556745ms","start":"2024-09-23T11:22:04.069692Z","end":"2024-09-23T11:22:04.226249Z","steps":["trace[830481837] 'process raft request'  (duration: 156.243309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.226632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.385138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-23T11:22:04.226699Z","caller":"traceutil/trace.go:171","msg":"trace[367210171] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:638; }","duration":"127.467922ms","start":"2024-09-23T11:22:04.099215Z","end":"2024-09-23T11:22:04.226683Z","steps":["trace[367210171] 'agreement among raft nodes before linearized reading'  (duration: 127.321016ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:22:04.471674Z","caller":"traceutil/trace.go:171","msg":"trace[1964086718] linearizableReadLoop","detail":"{readStateIndex:673; appliedIndex:672; }","duration":"237.377575ms","start":"2024-09-23T11:22:04.234261Z","end":"2024-09-23T11:22:04.471638Z","steps":["trace[1964086718] 'read index received'  (duration: 232.587623ms)","trace[1964086718] 'applied index is now lower than readState.Index'  (duration: 4.789342ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:22:04.471876Z","caller":"traceutil/trace.go:171","msg":"trace[1385562886] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"238.632541ms","start":"2024-09-23T11:22:04.233232Z","end":"2024-09-23T11:22:04.471865Z","steps":["trace[1385562886] 'process raft request'  (duration: 233.665946ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.472182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.907588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-f6k8p\" ","response":"range_response_count:1 size:3703"}
	{"level":"info","ts":"2024-09-23T11:22:04.472229Z","caller":"traceutil/trace.go:171","msg":"trace[1010372944] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-f6k8p; range_end:; response_count:1; response_revision:639; }","duration":"237.963059ms","start":"2024-09-23T11:22:04.234256Z","end":"2024-09-23T11:22:04.472220Z","steps":["trace[1010372944] 'agreement among raft nodes before linearized reading'  (duration: 237.828269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:22:04.472420Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.381297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-399279-m03\" ","response":"range_response_count:1 size:2824"}
	{"level":"info","ts":"2024-09-23T11:22:04.472464Z","caller":"traceutil/trace.go:171","msg":"trace[1107095018] range","detail":"{range_begin:/registry/minions/multinode-399279-m03; range_end:; response_count:1; response_revision:639; }","duration":"127.42897ms","start":"2024-09-23T11:22:04.345029Z","end":"2024-09-23T11:22:04.472458Z","steps":["trace[1107095018] 'agreement among raft nodes before linearized reading'  (duration: 127.320061ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:25:18.590062Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T11:25:18.590192Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-399279","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"]}
	{"level":"warn","ts":"2024-09-23T11:25:18.590322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.590424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.668285Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.71:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:25:18.668376Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.71:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:25:18.668571Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"226d7ac4e2309206","current-leader-member-id":"226d7ac4e2309206"}
	{"level":"info","ts":"2024-09-23T11:25:18.671641Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:25:18.671883Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-23T11:25:18.672037Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-399279","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"]}
	
	
	==> kernel <==
	 11:31:09 up 11 min,  0 users,  load average: 0.21, 0.25, 0.17
	Linux multinode-399279 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [87e705b8bdacd2c032ce10b901a6b52f196613e3c30026277c571b16c838d598] <==
	I0923 11:24:38.370664       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:24:48.370127       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:24:48.370194       1 main.go:299] handling current node
	I0923 11:24:48.370214       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:24:48.370220       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:24:48.370402       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:24:48.370429       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:24:58.371349       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:24:58.371439       1 main.go:299] handling current node
	I0923 11:24:58.371458       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:24:58.371466       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:24:58.371649       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:24:58.371673       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:08.365809       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:25:08.365925       1 main.go:299] handling current node
	I0923 11:25:08.366037       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:25:08.366073       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:25:08.366232       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:25:08.366254       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:18.363218       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:25:18.363261       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:25:18.363357       1 main.go:295] Handling node with IPs: map[192.168.39.138:{}]
	I0923 11:25:18.363382       1 main.go:322] Node multinode-399279-m03 has CIDR [10.244.3.0/24] 
	I0923 11:25:18.363496       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:25:18.363538       1 main.go:299] handling current node
	
	
	==> kindnet [8d372fd2cf2ff7ca54424ecede6007d2d21364846ec8c0faae9636aa31b84db2] <==
	I0923 11:29:59.757137       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:09.763062       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:09.763216       1 main.go:299] handling current node
	I0923 11:30:09.763267       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:09.763288       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:19.765387       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:19.765530       1 main.go:299] handling current node
	I0923 11:30:19.765570       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:19.765626       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:29.765117       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:29.765325       1 main.go:299] handling current node
	I0923 11:30:29.765367       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:29.765389       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:39.756127       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:39.756194       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:39.756440       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:39.756468       1 main.go:299] handling current node
	I0923 11:30:49.756049       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:49.756123       1 main.go:299] handling current node
	I0923 11:30:49.756155       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:49.756164       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	I0923 11:30:59.756411       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0923 11:30:59.756712       1 main.go:299] handling current node
	I0923 11:30:59.756764       1 main.go:295] Handling node with IPs: map[192.168.39.60:{}]
	I0923 11:30:59.756795       1 main.go:322] Node multinode-399279-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a957e4461eccde684f516492d392f95f817b5dac5d1276905a71d18df7ba7b51] <==
	W0923 11:25:18.626503       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:25:18.626648       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:25:18.627012       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0923 11:25:18.627478       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0923 11:25:18.627668       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0923 11:25:18.627854       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0923 11:25:18.627907       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0923 11:25:18.627939       1 controller.go:132] Ending legacy_token_tracking_controller
	I0923 11:25:18.628048       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0923 11:25:18.628081       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0923 11:25:18.628172       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0923 11:25:18.628239       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0923 11:25:18.628292       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0923 11:25:18.628320       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0923 11:25:18.628443       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0923 11:25:18.628600       1 naming_controller.go:305] Shutting down NamingConditionController
	I0923 11:25:18.628659       1 controller.go:170] Shutting down OpenAPI controller
	I0923 11:25:18.628731       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0923 11:25:18.628915       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0923 11:25:18.629037       1 establishing_controller.go:92] Shutting down EstablishingController
	I0923 11:25:18.630043       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:25:18.630923       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:25:18.631015       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:25:18.631038       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0923 11:25:18.631108       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	
	
	==> kube-apiserver [aac4bf9cbc3d6b65284d8ca786743bdf4651dd486827de1bbe17a5e929df8381] <==
	I0923 11:26:57.777878       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:26:57.778022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:26:57.778190       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 11:26:57.779368       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:26:57.779880       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:26:57.780165       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:26:57.780286       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:26:57.784316       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:26:57.784466       1 policy_source.go:224] refreshing policies
	I0923 11:26:57.794889       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:26:57.795074       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:26:57.795112       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:26:57.795135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:26:57.795158       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:26:57.802628       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 11:26:57.807797       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 11:26:57.841210       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:26:58.694664       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:27:00.000810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 11:27:00.111327       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 11:27:00.123684       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 11:27:00.201259       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:27:00.211465       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:27:01.279594       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:27:01.473754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0920dd93b5facadc358b03b60bee1b14cd89a179211751ff3f01a704863c50f2] <==
	I0923 11:28:19.001778       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-399279-m03" podCIDRs=["10.244.2.0/24"]
	I0923 11:28:19.001819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.001841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.011849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.037929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:19.386396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:21.209327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:29.308349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:38.846854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:38.847059       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:28:38.858897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:41.137882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:43.521749       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:43.542320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:44.097186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:28:44.097266       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:29:26.160392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:29:26.178265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:29:26.189426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.804132ms"
	I0923 11:29:26.189600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.67µs"
	I0923 11:29:31.266732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:29:41.037497       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-f6k8p"
	I0923 11:29:41.072308       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-f6k8p"
	I0923 11:29:41.073234       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-fxxlf"
	I0923 11:29:41.121890       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-fxxlf"
	
	
	==> kube-controller-manager [1dcdb010092636aa88012859284276647c537ce71d455e544c97bff4e51146a0] <==
	I0923 11:22:52.235422       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:22:52.235585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.201944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:22:53.206466       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-399279-m03\" does not exist"
	I0923 11:22:53.214410       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-399279-m03" podCIDRs=["10.244.3.0/24"]
	I0923 11:22:53.214662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.215314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.224699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.278703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:53.611952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:22:55.681774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:03.503374       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:13.072889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:23:13.072950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:13.084686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:15.641474       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.660546       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-399279-m02"
	I0923 11:23:55.660736       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.665340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:23:55.696395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:23:55.705625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	I0923 11:23:55.790294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.850805ms"
	I0923 11:23:55.790564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="158.284µs"
	I0923 11:24:00.797681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m03"
	I0923 11:24:10.875869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-399279-m02"
	
	
	==> kube-proxy [1508d80d15a66ebeded02fb7f6bcc1944c73d899ed4783471d0242f45f63380f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:26:59.050299       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:26:59.075594       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E0923 11:26:59.076120       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:26:59.169155       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:26:59.169201       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:26:59.169229       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:26:59.173218       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:26:59.173467       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:26:59.173494       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:26:59.177608       1 config.go:199] "Starting service config controller"
	I0923 11:26:59.177644       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:26:59.178806       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:26:59.178832       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:26:59.180638       1 config.go:328] "Starting node config controller"
	I0923 11:26:59.180747       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:26:59.279354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:26:59.279439       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:26:59.280807       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e0815b2e94fc6b1519a747b04e450c3f4123d660919d0f0726c6028f000b0c53] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:20:17.522630       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:20:17.536227       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E0923 11:20:17.536343       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:20:17.575879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:20:17.575938       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:20:17.576018       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:20:17.578651       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:20:17.579185       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:20:17.579213       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:20:17.581037       1 config.go:199] "Starting service config controller"
	I0923 11:20:17.581066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:20:17.581090       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:20:17.581094       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:20:17.581451       1 config.go:328] "Starting node config controller"
	I0923 11:20:17.581482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:20:17.681858       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:20:17.681892       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:20:17.681907       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03f8f7a5a8d6b60512ae2ee0ae5934ee4b92e958178eb0750e33ab4350804880] <==
	E0923 11:20:08.627433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:08.627688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:20:08.627722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.473093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.473204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.522772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.523460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.638000       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:20:09.639326       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 11:20:09.679617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:20:09.679843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.681245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:20:09.681360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.689302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:20:09.689410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.699339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:20:09.699437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.773052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:20:09.774347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:20:09.823156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:20:09.823466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0923 11:20:11.816949       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:25:18.596877       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 11:25:18.597057       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 11:25:18.600086       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6ae00d08a26e721587aa3600856a96f58a49d68bb12cd75792c8a0c62ae610be] <==
	I0923 11:26:55.653160       1 serving.go:386] Generated self-signed cert in-memory
	W0923 11:26:57.717360       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:26:57.717565       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:26:57.717649       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:26:57.717678       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:26:57.777665       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 11:26:57.777716       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:26:57.791443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 11:26:57.791506       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:26:57.794208       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 11:26:57.794289       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:26:57.892264       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:29:54 multinode-399279 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:29:54 multinode-399279 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:29:54 multinode-399279 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:29:54 multinode-399279 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:29:54 multinode-399279 kubelet[2933]: E0923 11:29:54.085693    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090994085481835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:29:54 multinode-399279 kubelet[2933]: E0923 11:29:54.085736    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727090994085481835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:04 multinode-399279 kubelet[2933]: E0923 11:30:04.086941    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091004086610186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:04 multinode-399279 kubelet[2933]: E0923 11:30:04.087160    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091004086610186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:14 multinode-399279 kubelet[2933]: E0923 11:30:14.089196    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091014088712656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:14 multinode-399279 kubelet[2933]: E0923 11:30:14.089645    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091014088712656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:24 multinode-399279 kubelet[2933]: E0923 11:30:24.094170    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091024093849487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:24 multinode-399279 kubelet[2933]: E0923 11:30:24.094447    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091024093849487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:34 multinode-399279 kubelet[2933]: E0923 11:30:34.095756    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091034095228312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:34 multinode-399279 kubelet[2933]: E0923 11:30:34.096379    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091034095228312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:44 multinode-399279 kubelet[2933]: E0923 11:30:44.099513    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091044098872177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:44 multinode-399279 kubelet[2933]: E0923 11:30:44.099553    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091044098872177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:54 multinode-399279 kubelet[2933]: E0923 11:30:54.050665    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 11:30:54 multinode-399279 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:30:54 multinode-399279 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:30:54 multinode-399279 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:30:54 multinode-399279 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 11:30:54 multinode-399279 kubelet[2933]: E0923 11:30:54.101272    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091054100892699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:30:54 multinode-399279 kubelet[2933]: E0923 11:30:54.101315    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091054100892699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:31:04 multinode-399279 kubelet[2933]: E0923 11:31:04.102779    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091064102326061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 11:31:04 multinode-399279 kubelet[2933]: E0923 11:31:04.102814    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091064102326061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:31:08.263036   45159 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19689-3961/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-399279 -n multinode-399279
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-399279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.62s)

                                                
                                    
x
+
TestPreload (173.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-431525 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0923 11:35:57.441207   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-431525 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.300168224s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-431525 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-431525 image pull gcr.io/k8s-minikube/busybox: (3.297917956s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-431525
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-431525: (7.299694426s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-431525 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-431525 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.328864177s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-431525 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-23 11:37:53.348649006 +0000 UTC m=+4609.417967583
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-431525 -n test-preload-431525
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-431525 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-431525 logs -n 25: (1.079166051s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279 sudo cat                                       | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt                       | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m02:/home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n                                                                 | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | multinode-399279-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-399279 ssh -n multinode-399279-m02 sudo cat                                   | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-399279 node stop m03                                                          | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:22 UTC |
	| node    | multinode-399279 node start                                                             | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:22 UTC | 23 Sep 24 11:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| stop    | -p multinode-399279                                                                     | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:23 UTC |                     |
	| start   | -p multinode-399279                                                                     | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:28 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC |                     |
	| node    | multinode-399279 node delete                                                            | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-399279 stop                                                                   | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:28 UTC |                     |
	| start   | -p multinode-399279                                                                     | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:34 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-399279                                                                | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:34 UTC |                     |
	| start   | -p multinode-399279-m02                                                                 | multinode-399279-m02 | jenkins | v1.34.0 | 23 Sep 24 11:34 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-399279-m03                                                                 | multinode-399279-m03 | jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-399279                                                                 | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:34 UTC |                     |
	| delete  | -p multinode-399279-m03                                                                 | multinode-399279-m03 | jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	| delete  | -p multinode-399279                                                                     | multinode-399279     | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -p test-preload-431525                                                                  | test-preload-431525  | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:36 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-431525 image pull                                                          | test-preload-431525  | jenkins | v1.34.0 | 23 Sep 24 11:36 UTC | 23 Sep 24 11:36 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-431525                                                                  | test-preload-431525  | jenkins | v1.34.0 | 23 Sep 24 11:36 UTC | 23 Sep 24 11:36 UTC |
	| start   | -p test-preload-431525                                                                  | test-preload-431525  | jenkins | v1.34.0 | 23 Sep 24 11:36 UTC | 23 Sep 24 11:37 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-431525 image list                                                          | test-preload-431525  | jenkins | v1.34.0 | 23 Sep 24 11:37 UTC | 23 Sep 24 11:37 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:36:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:36:42.843701   47584 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:36:42.843931   47584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:36:42.843939   47584 out.go:358] Setting ErrFile to fd 2...
	I0923 11:36:42.843944   47584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:36:42.844177   47584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:36:42.844697   47584 out.go:352] Setting JSON to false
	I0923 11:36:42.845570   47584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4746,"bootTime":1727086657,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:36:42.845663   47584 start.go:139] virtualization: kvm guest
	I0923 11:36:42.847853   47584 out.go:177] * [test-preload-431525] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:36:42.849005   47584 notify.go:220] Checking for updates...
	I0923 11:36:42.849063   47584 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:36:42.850457   47584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:36:42.851696   47584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:36:42.852838   47584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:36:42.854057   47584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:36:42.855348   47584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:36:42.856949   47584 config.go:182] Loaded profile config "test-preload-431525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0923 11:36:42.857401   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:36:42.857461   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:36:42.871791   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0923 11:36:42.872240   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:36:42.872827   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:36:42.872846   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:36:42.873176   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:36:42.873362   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:36:42.874981   47584 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 11:36:42.876121   47584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:36:42.876444   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:36:42.876484   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:36:42.890877   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0923 11:36:42.891269   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:36:42.891699   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:36:42.891724   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:36:42.892061   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:36:42.892312   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:36:42.926713   47584 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 11:36:42.927816   47584 start.go:297] selected driver: kvm2
	I0923 11:36:42.927826   47584 start.go:901] validating driver "kvm2" against &{Name:test-preload-431525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:42.927929   47584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:36:42.928601   47584 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:42.928669   47584 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:36:42.943250   47584 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:36:42.943582   47584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:36:42.943616   47584 cni.go:84] Creating CNI manager for ""
	I0923 11:36:42.943659   47584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:36:42.943708   47584 start.go:340] cluster config:
	{Name:test-preload-431525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:42.943811   47584 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:42.945462   47584 out.go:177] * Starting "test-preload-431525" primary control-plane node in "test-preload-431525" cluster
	I0923 11:36:42.946705   47584 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 11:36:43.072433   47584 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0923 11:36:43.072466   47584 cache.go:56] Caching tarball of preloaded images
	I0923 11:36:43.072640   47584 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 11:36:43.074459   47584 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0923 11:36:43.075633   47584 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 11:36:43.198092   47584 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0923 11:36:55.751951   47584 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 11:36:55.752048   47584 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0923 11:36:56.590000   47584 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0923 11:36:56.590112   47584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/config.json ...
	I0923 11:36:56.590345   47584 start.go:360] acquireMachinesLock for test-preload-431525: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:36:56.590405   47584 start.go:364] duration metric: took 39.49µs to acquireMachinesLock for "test-preload-431525"
	I0923 11:36:56.590419   47584 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:36:56.590425   47584 fix.go:54] fixHost starting: 
	I0923 11:36:56.590669   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:36:56.590700   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:36:56.605323   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0923 11:36:56.605794   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:36:56.606273   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:36:56.606301   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:36:56.606586   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:36:56.606842   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:36:56.607004   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetState
	I0923 11:36:56.608659   47584 fix.go:112] recreateIfNeeded on test-preload-431525: state=Stopped err=<nil>
	I0923 11:36:56.608697   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	W0923 11:36:56.608846   47584 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:36:56.611219   47584 out.go:177] * Restarting existing kvm2 VM for "test-preload-431525" ...
	I0923 11:36:56.612626   47584 main.go:141] libmachine: (test-preload-431525) Calling .Start
	I0923 11:36:56.612807   47584 main.go:141] libmachine: (test-preload-431525) Ensuring networks are active...
	I0923 11:36:56.613599   47584 main.go:141] libmachine: (test-preload-431525) Ensuring network default is active
	I0923 11:36:56.613842   47584 main.go:141] libmachine: (test-preload-431525) Ensuring network mk-test-preload-431525 is active
	I0923 11:36:56.614293   47584 main.go:141] libmachine: (test-preload-431525) Getting domain xml...
	I0923 11:36:56.615136   47584 main.go:141] libmachine: (test-preload-431525) Creating domain...
	I0923 11:36:57.807684   47584 main.go:141] libmachine: (test-preload-431525) Waiting to get IP...
	I0923 11:36:57.808584   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:57.808980   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:57.809050   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:57.808968   47667 retry.go:31] will retry after 209.611129ms: waiting for machine to come up
	I0923 11:36:58.020452   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:58.020872   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:58.020896   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:58.020824   47667 retry.go:31] will retry after 254.762519ms: waiting for machine to come up
	I0923 11:36:58.277267   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:58.277728   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:58.277752   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:58.277678   47667 retry.go:31] will retry after 369.274673ms: waiting for machine to come up
	I0923 11:36:58.647996   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:58.648437   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:58.648463   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:58.648362   47667 retry.go:31] will retry after 525.194919ms: waiting for machine to come up
	I0923 11:36:59.174911   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:59.175294   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:59.175323   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:59.175237   47667 retry.go:31] will retry after 639.196698ms: waiting for machine to come up
	I0923 11:36:59.816081   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:36:59.816650   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:36:59.816673   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:36:59.816595   47667 retry.go:31] will retry after 637.764877ms: waiting for machine to come up
	I0923 11:37:00.456430   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:00.456752   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:00.456773   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:00.456707   47667 retry.go:31] will retry after 1.024141559s: waiting for machine to come up
	I0923 11:37:01.482436   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:01.482801   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:01.482830   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:01.482747   47667 retry.go:31] will retry after 1.043528451s: waiting for machine to come up
	I0923 11:37:02.527824   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:02.528223   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:02.528246   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:02.528174   47667 retry.go:31] will retry after 1.550571666s: waiting for machine to come up
	I0923 11:37:04.080997   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:04.081577   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:04.081604   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:04.081528   47667 retry.go:31] will retry after 2.15042099s: waiting for machine to come up
	I0923 11:37:06.234635   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:06.235043   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:06.235074   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:06.234993   47667 retry.go:31] will retry after 2.196544951s: waiting for machine to come up
	I0923 11:37:08.434105   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:08.434493   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:08.434516   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:08.434459   47667 retry.go:31] will retry after 2.532181276s: waiting for machine to come up
	I0923 11:37:10.970160   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:10.970662   47584 main.go:141] libmachine: (test-preload-431525) DBG | unable to find current IP address of domain test-preload-431525 in network mk-test-preload-431525
	I0923 11:37:10.970691   47584 main.go:141] libmachine: (test-preload-431525) DBG | I0923 11:37:10.970630   47667 retry.go:31] will retry after 3.016775083s: waiting for machine to come up
	I0923 11:37:13.990553   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:13.990936   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has current primary IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:13.990956   47584 main.go:141] libmachine: (test-preload-431525) Found IP for machine: 192.168.39.54
	I0923 11:37:13.990988   47584 main.go:141] libmachine: (test-preload-431525) Reserving static IP address...
	I0923 11:37:13.991320   47584 main.go:141] libmachine: (test-preload-431525) Reserved static IP address: 192.168.39.54
	I0923 11:37:13.991340   47584 main.go:141] libmachine: (test-preload-431525) Waiting for SSH to be available...
	I0923 11:37:13.991359   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "test-preload-431525", mac: "52:54:00:74:e5:df", ip: "192.168.39.54"} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:13.991384   47584 main.go:141] libmachine: (test-preload-431525) DBG | skip adding static IP to network mk-test-preload-431525 - found existing host DHCP lease matching {name: "test-preload-431525", mac: "52:54:00:74:e5:df", ip: "192.168.39.54"}
	I0923 11:37:13.991405   47584 main.go:141] libmachine: (test-preload-431525) DBG | Getting to WaitForSSH function...
	I0923 11:37:13.993686   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:13.993927   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:13.993955   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:13.994090   47584 main.go:141] libmachine: (test-preload-431525) DBG | Using SSH client type: external
	I0923 11:37:13.994110   47584 main.go:141] libmachine: (test-preload-431525) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa (-rw-------)
	I0923 11:37:13.994147   47584 main.go:141] libmachine: (test-preload-431525) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 11:37:13.994160   47584 main.go:141] libmachine: (test-preload-431525) DBG | About to run SSH command:
	I0923 11:37:13.994171   47584 main.go:141] libmachine: (test-preload-431525) DBG | exit 0
	I0923 11:37:14.117598   47584 main.go:141] libmachine: (test-preload-431525) DBG | SSH cmd err, output: <nil>: 
	I0923 11:37:14.117897   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetConfigRaw
	I0923 11:37:14.118512   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetIP
	I0923 11:37:14.120894   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.121209   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.121236   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.121480   47584 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/config.json ...
	I0923 11:37:14.121710   47584 machine.go:93] provisionDockerMachine start ...
	I0923 11:37:14.121731   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:14.121925   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.123941   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.124264   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.124290   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.124376   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:14.124558   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.124697   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.124822   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:14.124954   47584 main.go:141] libmachine: Using SSH client type: native
	I0923 11:37:14.125147   47584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0923 11:37:14.125163   47584 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:37:14.225543   47584 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 11:37:14.225578   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetMachineName
	I0923 11:37:14.225941   47584 buildroot.go:166] provisioning hostname "test-preload-431525"
	I0923 11:37:14.225964   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetMachineName
	I0923 11:37:14.226197   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.228731   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.229060   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.229086   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.229234   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:14.229413   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.229580   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.229717   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:14.229853   47584 main.go:141] libmachine: Using SSH client type: native
	I0923 11:37:14.230019   47584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0923 11:37:14.230033   47584 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-431525 && echo "test-preload-431525" | sudo tee /etc/hostname
	I0923 11:37:14.344329   47584 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-431525
	
	I0923 11:37:14.344355   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.346759   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.347111   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.347142   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.347320   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:14.347511   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.347674   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.347825   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:14.347980   47584 main.go:141] libmachine: Using SSH client type: native
	I0923 11:37:14.348149   47584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0923 11:37:14.348166   47584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-431525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-431525/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-431525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:37:14.458659   47584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
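
The SSH snippet above is minikube's idempotent hosts-file edit: if no /etc/hosts line already ends in the new hostname, it either rewrites the existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same logic, operating on a string instead of /etc/hosts itself (the helper name is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname returns hosts content that maps 127.0.1.1 to name,
	// rewriting an existing 127.0.1.1 entry or appending one if absent.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // hostname already present, nothing to do
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "test-preload-431525"))
	}
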
	I0923 11:37:14.458687   47584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:37:14.458731   47584 buildroot.go:174] setting up certificates
	I0923 11:37:14.458744   47584 provision.go:84] configureAuth start
	I0923 11:37:14.458764   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetMachineName
	I0923 11:37:14.459058   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetIP
	I0923 11:37:14.461597   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.461902   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.461922   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.462122   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.464452   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.464713   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.464741   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.464820   47584 provision.go:143] copyHostCerts
	I0923 11:37:14.464871   47584 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:37:14.464888   47584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:37:14.464953   47584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:37:14.465109   47584 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:37:14.465118   47584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:37:14.465145   47584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:37:14.465218   47584 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:37:14.465225   47584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:37:14.465247   47584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
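
copyHostCerts above just refreshes ca.pem, cert.pem and key.pem under .minikube: remove any stale copy, then copy the source file over again. A rough Go equivalent of that remove-then-copy step (the paths are placeholders, not the ones from this run):

	package main

	import (
		"io"
		"log"
		"os"
	)

	// refreshCopy removes dst if it already exists, then copies src to dst
	// with owner-only permissions, mirroring the "found ..., removing ..." log lines.
	func refreshCopy(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := refreshCopy("certs/ca.pem", "ca.pem"); err != nil {
			log.Fatal(err)
		}
	}
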
	I0923 11:37:14.465322   47584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.test-preload-431525 san=[127.0.0.1 192.168.39.54 localhost minikube test-preload-431525]
	I0923 11:37:14.711323   47584 provision.go:177] copyRemoteCerts
	I0923 11:37:14.711387   47584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:37:14.711413   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.713916   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.714294   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.714325   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.714442   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:14.714623   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.714770   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:14.714888   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:14.795305   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 11:37:14.819910   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:37:14.847968   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:37:14.872262   47584 provision.go:87] duration metric: took 413.506262ms to configureAuth
	I0923 11:37:14.872290   47584 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:37:14.872465   47584 config.go:182] Loaded profile config "test-preload-431525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0923 11:37:14.872546   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:14.875125   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.875498   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:14.875522   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:14.875705   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:14.875897   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.876052   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:14.876207   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:14.876376   47584 main.go:141] libmachine: Using SSH client type: native
	I0923 11:37:14.876550   47584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0923 11:37:14.876565   47584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:37:15.096689   47584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:37:15.096716   47584 machine.go:96] duration metric: took 974.992665ms to provisionDockerMachine
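
The last provisioning step above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts CRI-O, so registries inside the service CIDR are reachable over plain HTTP. A hedged sketch of composing and writing that file; it writes to a local path instead of /etc/sysconfig, which needs root:

	package main

	import (
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Same payload the provisioner pipes through sudo tee; the output
		// path is a stand-in so the example runs without root.
		serviceCIDR := "10.96.0.0/12"
		content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
		if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
			log.Fatal(err)
		}
		// A real run would follow this with: sudo systemctl restart crio
	}
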
	I0923 11:37:15.096728   47584 start.go:293] postStartSetup for "test-preload-431525" (driver="kvm2")
	I0923 11:37:15.096742   47584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:37:15.096759   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:15.097046   47584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:37:15.097074   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:15.099865   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.100246   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:15.100272   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.100402   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:15.100582   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:15.100709   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:15.100835   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:15.184411   47584 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:37:15.189040   47584 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:37:15.189059   47584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:37:15.189116   47584 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:37:15.189188   47584 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:37:15.189276   47584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:37:15.199283   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:37:15.226739   47584 start.go:296] duration metric: took 129.997092ms for postStartSetup
	I0923 11:37:15.226774   47584 fix.go:56] duration metric: took 18.636349308s for fixHost
	I0923 11:37:15.226813   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:15.229300   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.229638   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:15.229660   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.229850   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:15.230037   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:15.230156   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:15.230257   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:15.230395   47584 main.go:141] libmachine: Using SSH client type: native
	I0923 11:37:15.230548   47584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0923 11:37:15.230556   47584 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:37:15.330014   47584 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091435.304105011
	
	I0923 11:37:15.330038   47584 fix.go:216] guest clock: 1727091435.304105011
	I0923 11:37:15.330045   47584 fix.go:229] Guest: 2024-09-23 11:37:15.304105011 +0000 UTC Remote: 2024-09-23 11:37:15.226798026 +0000 UTC m=+32.417117359 (delta=77.306985ms)
	I0923 11:37:15.330085   47584 fix.go:200] guest clock delta is within tolerance: 77.306985ms
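
The fix step compares the guest clock (read via date +%s.%N over SSH) against the host and only resyncs when the delta exceeds a tolerance; here the 77ms delta passes. A toy version of that comparison, where the 2-second threshold is an illustrative value and not necessarily minikube's exact default:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		host := time.Now()
		guest := host.Add(77 * time.Millisecond) // delta taken from the log above
		tolerance := 2 * time.Second             // illustrative threshold only

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, resync needed\n", delta)
		}
	}
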
	I0923 11:37:15.330093   47584 start.go:83] releasing machines lock for "test-preload-431525", held for 18.739678618s
	I0923 11:37:15.330115   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:15.330380   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetIP
	I0923 11:37:15.332759   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.333045   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:15.333071   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.333197   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:15.333677   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:15.333799   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:15.333884   47584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:37:15.333920   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:15.333957   47584 ssh_runner.go:195] Run: cat /version.json
	I0923 11:37:15.333980   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:15.336443   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.336692   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.336800   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:15.336851   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.336981   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:15.337091   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:15.337117   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:15.337136   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:15.337303   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:15.337311   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:15.337461   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:15.337484   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:15.337557   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:15.337689   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:15.410562   47584 ssh_runner.go:195] Run: systemctl --version
	I0923 11:37:15.446648   47584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:37:15.600308   47584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:37:15.607064   47584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:37:15.607129   47584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:37:15.623693   47584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:37:15.623720   47584 start.go:495] detecting cgroup driver to use...
	I0923 11:37:15.623807   47584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:37:15.639844   47584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:37:15.653795   47584 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:37:15.653848   47584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:37:15.667624   47584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:37:15.681581   47584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:37:15.798437   47584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:37:15.942531   47584 docker.go:233] disabling docker service ...
	I0923 11:37:15.942587   47584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:37:15.957368   47584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:37:15.970664   47584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:37:16.103078   47584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:37:16.225887   47584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:37:16.239495   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:37:16.257462   47584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0923 11:37:16.257534   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.267455   47584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:37:16.267531   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.277490   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.287419   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.297322   47584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:37:16.307615   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.317391   47584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.334578   47584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:37:16.344637   47584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:37:16.353726   47584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 11:37:16.353803   47584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 11:37:16.366048   47584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
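
Note the recovery above: probing net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is switched on before CRI-O is restarted. Roughly, as a Go sketch shelling out to the same commands (requires root; the commands are the ones shown in the log):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl is missing, load the module that provides it.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		// Enable IPv4 forwarding, as "echo 1 > /proc/sys/net/ipv4/ip_forward" does.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}
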
	I0923 11:37:16.375624   47584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:37:16.498084   47584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:37:16.586300   47584 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:37:16.586375   47584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:37:16.591426   47584 start.go:563] Will wait 60s for crictl version
	I0923 11:37:16.591481   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:16.595136   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:37:16.633753   47584 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:37:16.633843   47584 ssh_runner.go:195] Run: crio --version
	I0923 11:37:16.662027   47584 ssh_runner.go:195] Run: crio --version
	I0923 11:37:16.692384   47584 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0923 11:37:16.693688   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetIP
	I0923 11:37:16.696143   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:16.696464   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:16.696483   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:16.696684   47584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:37:16.700915   47584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:37:16.713495   47584 kubeadm.go:883] updating cluster {Name:test-preload-431525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:37:16.713594   47584 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0923 11:37:16.713640   47584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:37:16.754691   47584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0923 11:37:16.754752   47584 ssh_runner.go:195] Run: which lz4
	I0923 11:37:16.758897   47584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:37:16.763140   47584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:37:16.763168   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0923 11:37:18.318085   47584 crio.go:462] duration metric: took 1.559213186s to copy over tarball
	I0923 11:37:18.318163   47584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:37:20.660879   47584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.342693478s)
	I0923 11:37:20.660904   47584 crio.go:469] duration metric: took 2.342789464s to extract the tarball
	I0923 11:37:20.660912   47584 ssh_runner.go:146] rm: /preloaded.tar.lz4
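
Because /preloaded.tar.lz4 is absent on the VM, the ~459 MB preload tarball is copied over and unpacked into /var with lz4, then deleted, which is what spares most image pulls from the network. A minimal sketch of that existence check plus extraction (paths as in the log; running it for real needs root and an lz4-capable tar):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			log.Fatalf("tarball missing, copy it over first: %v", err)
		}
		// Same flags the log shows: preserve xattrs, decompress with lz4, unpack under /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("extract preload: %v", err)
		}
	}
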
	I0923 11:37:20.701905   47584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:37:20.747215   47584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0923 11:37:20.747241   47584 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 11:37:20.747291   47584 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:37:20.747324   47584 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:20.747336   47584 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:20.747375   47584 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:20.747380   47584 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:20.747399   47584 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:20.747375   47584 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 11:37:20.747373   47584 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:20.748824   47584 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 11:37:20.748835   47584 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:20.748826   47584 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:20.748826   47584 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:20.748861   47584 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:20.748884   47584 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:20.748835   47584 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:37:20.748864   47584 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:20.901591   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 11:37:20.902825   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:20.905811   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:20.908587   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:20.924183   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:20.926860   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:20.995605   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:20.998139   47584 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0923 11:37:20.998175   47584 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0923 11:37:20.998235   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.037191   47584 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0923 11:37:21.037238   47584 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:21.037252   47584 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0923 11:37:21.037282   47584 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:21.037319   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.037286   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.065314   47584 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0923 11:37:21.065361   47584 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:21.065437   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.076514   47584 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0923 11:37:21.076549   47584 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:21.076567   47584 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0923 11:37:21.076591   47584 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:21.076593   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.076618   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.097772   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 11:37:21.097847   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:21.097886   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:21.097900   47584 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0923 11:37:21.097928   47584 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:21.097955   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:21.097966   47584 ssh_runner.go:195] Run: which crictl
	I0923 11:37:21.097967   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:21.098014   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:21.188602   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 11:37:21.231589   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:21.235678   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:21.235760   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:21.235821   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:21.235867   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:21.235948   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:21.356608   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0923 11:37:21.356717   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 11:37:21.356821   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0923 11:37:21.387917   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0923 11:37:21.387933   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0923 11:37:21.396445   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:21.396484   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0923 11:37:21.518184   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0923 11:37:21.518195   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 11:37:21.518302   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 11:37:21.518328   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0923 11:37:21.518352   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 11:37:21.518414   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 11:37:21.544688   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0923 11:37:21.544738   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0923 11:37:21.544693   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0923 11:37:21.544807   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 11:37:21.544837   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 11:37:21.544809   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 11:37:21.549370   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0923 11:37:21.549402   47584 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 11:37:21.549441   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0923 11:37:21.549467   47584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 11:37:21.549482   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0923 11:37:21.549442   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0923 11:37:21.555500   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0923 11:37:21.555546   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0923 11:37:21.555569   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0923 11:37:21.599805   47584 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0923 11:37:21.599919   47584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 11:37:22.000850   47584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:37:24.223578   47584 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.623628657s)
	I0923 11:37:24.223621   47584 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0923 11:37:24.223642   47584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.674174977s)
	I0923 11:37:24.223649   47584 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.222766743s)
	I0923 11:37:24.223659   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 11:37:24.223703   47584 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 11:37:24.223750   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0923 11:37:26.475524   47584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.251742104s)
	I0923 11:37:26.475564   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 11:37:26.475595   47584 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 11:37:26.475645   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0923 11:37:26.617752   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0923 11:37:26.617790   47584 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 11:37:26.617847   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0923 11:37:27.060727   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0923 11:37:27.060781   47584 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 11:37:27.060859   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0923 11:37:27.805512   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0923 11:37:27.805647   47584 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 11:37:27.805749   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0923 11:37:28.649196   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0923 11:37:28.649237   47584 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 11:37:28.649299   47584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0923 11:37:29.395860   47584 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0923 11:37:29.395908   47584 cache_images.go:123] Successfully loaded all cached images
	I0923 11:37:29.395920   47584 cache_images.go:92] duration metric: took 8.648666586s to LoadCachedImages
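
Since the extracted preload still did not contain the v1.24.4 images CRI-O expects, each cached image tarball under /var/lib/minikube/images is loaded into the runtime one at a time with podman load, which accounts for the ~8.6s above. A simplified version of that loop (image list and directory copied from the log; sudo and podman must be available on the target):

	package main

	import (
		"log"
		"os/exec"
		"path/filepath"
	)

	func main() {
		images := []string{
			"coredns_v1.8.6", "etcd_3.5.3-0", "pause_3.7",
			"kube-scheduler_v1.24.4", "kube-apiserver_v1.24.4",
			"kube-proxy_v1.24.4", "kube-controller-manager_v1.24.4",
		}
		for _, img := range images {
			path := filepath.Join("/var/lib/minikube/images", img)
			// Same command the log runs for each cached tarball.
			if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
				log.Fatalf("podman load %s: %v\n%s", img, err, out)
			}
			log.Printf("loaded %s", img)
		}
	}
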
	I0923 11:37:29.395931   47584 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.24.4 crio true true} ...
	I0923 11:37:29.396041   47584 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-431525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-431525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:37:29.396105   47584 ssh_runner.go:195] Run: crio config
	I0923 11:37:29.440720   47584 cni.go:84] Creating CNI manager for ""
	I0923 11:37:29.440744   47584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:37:29.440757   47584 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:37:29.440774   47584 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-431525 NodeName:test-preload-431525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:37:29.440893   47584 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-431525"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:37:29.440965   47584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0923 11:37:29.451284   47584 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:37:29.451346   47584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:37:29.461310   47584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0923 11:37:29.478149   47584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:37:29.494296   47584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
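
The kubelet drop-in, kubelet unit and kubeadm.yaml printed above are rendered in memory from templates and shipped over SSH as the three small "scp memory" transfers (378, 352 and 2103 bytes). A hedged illustration of that rendering step with text/template; the template text is trimmed to a couple of fields and is not minikube's actual template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	`
		data := struct {
			NodeIP string
			Port   int
		}{NodeIP: "192.168.39.54", Port: 8443}

		// Render to stdout; minikube instead ships the rendered bytes over SSH.
		if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}
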
	I0923 11:37:29.510945   47584 ssh_runner.go:195] Run: grep 192.168.39.54	control-plane.minikube.internal$ /etc/hosts
	I0923 11:37:29.514643   47584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
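
The shell one-liner above rewrites /etc/hosts in place: it drops any line already ending in a tab plus control-plane.minikube.internal and appends the fresh 192.168.39.54 mapping. A hedged Go sketch of the same filtering (it reads and prints rather than overwriting the file, which is an assumption of the sketch):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any existing entry for host and appends ip<TAB>host,
// mirroring the grep/echo pipeline in the log line above.
func updateHosts(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping, replaced below
		}
		kept = append(kept, line)
	}
	joined := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return joined + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(updateHosts(string(data), "192.168.39.54", "control-plane.minikube.internal"))
}
```
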
	I0923 11:37:29.526874   47584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:37:29.661398   47584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:37:29.679472   47584 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525 for IP: 192.168.39.54
	I0923 11:37:29.679492   47584 certs.go:194] generating shared ca certs ...
	I0923 11:37:29.679514   47584 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:37:29.679713   47584 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:37:29.679786   47584 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:37:29.679802   47584 certs.go:256] generating profile certs ...
	I0923 11:37:29.679897   47584 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/client.key
	I0923 11:37:29.679975   47584 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/apiserver.key.48039b76
	I0923 11:37:29.680023   47584 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/proxy-client.key
	I0923 11:37:29.680151   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:37:29.680180   47584 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:37:29.680189   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:37:29.680217   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:37:29.680239   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:37:29.680279   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:37:29.680316   47584 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:37:29.681146   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:37:29.715699   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:37:29.741784   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:37:29.773775   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:37:29.799190   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0923 11:37:29.831915   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 11:37:29.864905   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:37:29.894706   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:37:29.918727   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:37:29.942170   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:37:29.965878   47584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:37:29.990250   47584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:37:30.007758   47584 ssh_runner.go:195] Run: openssl version
	I0923 11:37:30.013927   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:37:30.025076   47584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:37:30.029732   47584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:37:30.029791   47584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:37:30.035676   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:37:30.046499   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:37:30.057507   47584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:37:30.062416   47584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:37:30.062472   47584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:37:30.068518   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
	I0923 11:37:30.079461   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:37:30.090299   47584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:37:30.094801   47584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:37:30.094846   47584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:37:30.100458   47584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:37:30.111092   47584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:37:30.115843   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:37:30.121835   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:37:30.127484   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:37:30.133354   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:37:30.139180   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:37:30.144904   47584 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
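
Each of the openssl invocations above uses `-checkend 86400` to ask whether the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (standalone sketch, not the code minikube runs over SSH):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```
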
	I0923 11:37:30.151138   47584 kubeadm.go:392] StartCluster: {Name:test-preload-431525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-431525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:37:30.151247   47584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:37:30.151295   47584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:37:30.190984   47584 cri.go:89] found id: ""
	I0923 11:37:30.191070   47584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:37:30.201358   47584 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 11:37:30.201399   47584 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 11:37:30.201447   47584 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 11:37:30.210590   47584 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:37:30.210983   47584 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-431525" does not appear in /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:37:30.211116   47584 kubeconfig.go:62] /home/jenkins/minikube-integration/19689-3961/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-431525" cluster setting kubeconfig missing "test-preload-431525" context setting]
	I0923 11:37:30.211422   47584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:37:30.212016   47584 kapi.go:59] client config for test-preload-431525: &rest.Config{Host:"https://192.168.39.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:37:30.212590   47584 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 11:37:30.221477   47584 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.54
	I0923 11:37:30.221502   47584 kubeadm.go:1160] stopping kube-system containers ...
	I0923 11:37:30.221512   47584 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0923 11:37:30.221553   47584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:37:30.260342   47584 cri.go:89] found id: ""
	I0923 11:37:30.260429   47584 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 11:37:30.276863   47584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:37:30.286536   47584 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:37:30.286557   47584 kubeadm.go:157] found existing configuration files:
	
	I0923 11:37:30.286609   47584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:37:30.295718   47584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:37:30.295788   47584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:37:30.305487   47584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:37:30.314403   47584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:37:30.314460   47584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:37:30.323679   47584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:37:30.332492   47584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:37:30.332553   47584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:37:30.341601   47584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:37:30.353768   47584 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:37:30.353843   47584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:37:30.364405   47584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:37:30.374452   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:37:30.467460   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:37:31.292306   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:37:31.547160   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:37:31.644623   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
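
Because existing configuration files were detected, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the regenerated config instead of performing a full init. Below is a rough sketch of the command shape for two of those phases, run locally for illustration only; minikube actually executes them over SSH with its own PATH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.4/kubeadm" // path taken from the log above
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", err)
			os.Exit(1)
		}
	}
}
```
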
	I0923 11:37:31.751740   47584 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:31.751827   47584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:32.252426   47584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:32.752531   47584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:32.770549   47584 api_server.go:72] duration metric: took 1.018807901s to wait for apiserver process to appear ...
	I0923 11:37:32.770578   47584 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:32.770599   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:32.771115   47584 api_server.go:269] stopped: https://192.168.39.54:8443/healthz: Get "https://192.168.39.54:8443/healthz": dial tcp 192.168.39.54:8443: connect: connection refused
	I0923 11:37:33.270982   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:36.657678   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 11:37:36.657703   47584 api_server.go:103] status: https://192.168.39.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 11:37:36.657721   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:36.692899   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 11:37:36.692934   47584 api_server.go:103] status: https://192.168.39.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 11:37:36.771094   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:36.786389   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:37:36.786426   47584 api_server.go:103] status: https://192.168.39.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:37:37.270912   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:37.276957   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:37:37.276982   47584 api_server.go:103] status: https://192.168.39.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:37:37.771666   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:37.776584   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:37:37.776618   47584 api_server.go:103] status: https://192.168.39.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:37:38.271311   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:38.281768   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0923 11:37:38.289075   47584 api_server.go:141] control plane version: v1.24.4
	I0923 11:37:38.289108   47584 api_server.go:131] duration metric: took 5.518521889s to wait for apiserver health ...
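
The healthz probe above first sees 403 (anonymous access is denied while RBAC bootstrap roles are still being created), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A small sketch of the same poll-until-200 loop; skipping TLS verification is an assumption made only to keep the sketch short, whereas the real client is configured with the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.54:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
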
	I0923 11:37:38.289119   47584 cni.go:84] Creating CNI manager for ""
	I0923 11:37:38.289128   47584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:37:38.290971   47584 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:37:38.292565   47584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:37:38.304543   47584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 11:37:38.323063   47584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:38.323184   47584 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 11:37:38.323207   47584 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 11:37:38.332982   47584 system_pods.go:59] 7 kube-system pods found
	I0923 11:37:38.333022   47584 system_pods.go:61] "coredns-6d4b75cb6d-9jwgs" [f5e3ba78-532b-4290-a68f-389990db7612] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:37:38.333032   47584 system_pods.go:61] "etcd-test-preload-431525" [5d124bab-f432-43a4-9d8f-5b9818683af7] Running
	I0923 11:37:38.333040   47584 system_pods.go:61] "kube-apiserver-test-preload-431525" [e3a3ca6b-8000-4e62-ba9a-3c1e90e7142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 11:37:38.333047   47584 system_pods.go:61] "kube-controller-manager-test-preload-431525" [8303326c-d92f-4bf1-a9bf-3f08a53b1922] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 11:37:38.333069   47584 system_pods.go:61] "kube-proxy-j82pd" [22b3f621-1600-43b7-882c-9bfaf603d40a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 11:37:38.333075   47584 system_pods.go:61] "kube-scheduler-test-preload-431525" [563e81f6-fe3b-4bd1-9c98-91dbf5538e49] Running
	I0923 11:37:38.333083   47584 system_pods.go:61] "storage-provisioner" [7e961cf1-dd3a-42b6-a7de-b16d4f5d4865] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 11:37:38.333099   47584 system_pods.go:74] duration metric: took 10.007201ms to wait for pod list to return data ...
	I0923 11:37:38.333118   47584 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:38.336847   47584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:37:38.336875   47584 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:38.336887   47584 node_conditions.go:105] duration metric: took 3.763251ms to run NodePressure ...
	I0923 11:37:38.336912   47584 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:37:38.516376   47584 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 11:37:38.520534   47584 kubeadm.go:739] kubelet initialised
	I0923 11:37:38.520555   47584 kubeadm.go:740] duration metric: took 4.153063ms waiting for restarted kubelet to initialise ...
	I0923 11:37:38.520565   47584 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:38.525900   47584 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:38.530655   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.530676   47584 pod_ready.go:82] duration metric: took 4.750324ms for pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:38.530686   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.530693   47584 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:38.534669   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "etcd-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.534687   47584 pod_ready.go:82] duration metric: took 3.981397ms for pod "etcd-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:38.534696   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "etcd-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.534703   47584 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:38.538984   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "kube-apiserver-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.539005   47584 pod_ready.go:82] duration metric: took 4.289444ms for pod "kube-apiserver-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:38.539015   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "kube-apiserver-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.539025   47584 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:38.727455   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.727479   47584 pod_ready.go:82] duration metric: took 188.441146ms for pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:38.727488   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:38.727494   47584 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j82pd" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:39.126789   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "kube-proxy-j82pd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:39.126814   47584 pod_ready.go:82] duration metric: took 399.311129ms for pod "kube-proxy-j82pd" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:39.126823   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "kube-proxy-j82pd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:39.126829   47584 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:39.527555   47584 pod_ready.go:98] node "test-preload-431525" hosting pod "kube-scheduler-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:39.527587   47584 pod_ready.go:82] duration metric: took 400.750953ms for pod "kube-scheduler-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	E0923 11:37:39.527595   47584 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-431525" hosting pod "kube-scheduler-test-preload-431525" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:39.527602   47584 pod_ready.go:39] duration metric: took 1.007027055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:39.527619   47584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:37:39.539722   47584 ops.go:34] apiserver oom_adj: -16
	I0923 11:37:39.539746   47584 kubeadm.go:597] duration metric: took 9.338340504s to restartPrimaryControlPlane
	I0923 11:37:39.539757   47584 kubeadm.go:394] duration metric: took 9.388625397s to StartCluster
	I0923 11:37:39.539776   47584 settings.go:142] acquiring lock: {Name:mka0fc37129eef8f35af2c1a6ddc567156410b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:37:39.539846   47584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:37:39.540692   47584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/kubeconfig: {Name:mk40a9897a5577a89be748f874c2066abd769fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:37:39.540949   47584 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 11:37:39.541014   47584 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 11:37:39.541121   47584 addons.go:69] Setting storage-provisioner=true in profile "test-preload-431525"
	I0923 11:37:39.541143   47584 addons.go:234] Setting addon storage-provisioner=true in "test-preload-431525"
	I0923 11:37:39.541148   47584 addons.go:69] Setting default-storageclass=true in profile "test-preload-431525"
	I0923 11:37:39.541198   47584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-431525"
	I0923 11:37:39.541216   47584 config.go:182] Loaded profile config "test-preload-431525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0923 11:37:39.541154   47584 addons.go:243] addon storage-provisioner should already be in state true
	I0923 11:37:39.541305   47584 host.go:66] Checking if "test-preload-431525" exists ...
	I0923 11:37:39.541612   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:37:39.541663   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:37:39.541678   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:37:39.541739   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:37:39.542925   47584 out.go:177] * Verifying Kubernetes components...
	I0923 11:37:39.544533   47584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:37:39.556841   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0923 11:37:39.556877   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0923 11:37:39.557279   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:37:39.557322   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:37:39.557751   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:37:39.557765   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:37:39.557910   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:37:39.557941   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:37:39.558114   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:37:39.558229   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:37:39.558421   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetState
	I0923 11:37:39.558644   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:37:39.558684   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:37:39.560481   47584 kapi.go:59] client config for test-preload-431525: &rest.Config{Host:"https://192.168.39.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/client.crt", KeyFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/profiles/test-preload-431525/client.key", CAFile:"/home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:37:39.560729   47584 addons.go:234] Setting addon default-storageclass=true in "test-preload-431525"
	W0923 11:37:39.560744   47584 addons.go:243] addon default-storageclass should already be in state true
	I0923 11:37:39.560779   47584 host.go:66] Checking if "test-preload-431525" exists ...
	I0923 11:37:39.561025   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:37:39.561063   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:37:39.573177   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0923 11:37:39.573746   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:37:39.574273   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:37:39.574314   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:37:39.574680   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:37:39.574864   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetState
	I0923 11:37:39.575125   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0923 11:37:39.575562   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:37:39.576097   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:37:39.576130   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:37:39.576482   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:37:39.576607   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:39.576986   47584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:37:39.577025   47584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:37:39.578705   47584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:37:39.580056   47584 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:37:39.580073   47584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:37:39.580090   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:39.582647   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:39.582997   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:39.583046   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:39.583177   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:39.583343   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:39.583496   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:39.583671   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:39.612056   47584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0923 11:37:39.612488   47584 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:37:39.613014   47584 main.go:141] libmachine: Using API Version  1
	I0923 11:37:39.613036   47584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:37:39.613415   47584 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:37:39.613631   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetState
	I0923 11:37:39.615239   47584 main.go:141] libmachine: (test-preload-431525) Calling .DriverName
	I0923 11:37:39.615467   47584 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:37:39.615485   47584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:37:39.615511   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHHostname
	I0923 11:37:39.618488   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:39.618986   47584 main.go:141] libmachine: (test-preload-431525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e5:df", ip: ""} in network mk-test-preload-431525: {Iface:virbr1 ExpiryTime:2024-09-23 12:37:07 +0000 UTC Type:0 Mac:52:54:00:74:e5:df Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:test-preload-431525 Clientid:01:52:54:00:74:e5:df}
	I0923 11:37:39.619011   47584 main.go:141] libmachine: (test-preload-431525) DBG | domain test-preload-431525 has defined IP address 192.168.39.54 and MAC address 52:54:00:74:e5:df in network mk-test-preload-431525
	I0923 11:37:39.619214   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHPort
	I0923 11:37:39.619383   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHKeyPath
	I0923 11:37:39.619559   47584 main.go:141] libmachine: (test-preload-431525) Calling .GetSSHUsername
	I0923 11:37:39.619708   47584 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/test-preload-431525/id_rsa Username:docker}
	I0923 11:37:39.718367   47584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:37:39.736742   47584 node_ready.go:35] waiting up to 6m0s for node "test-preload-431525" to be "Ready" ...
	I0923 11:37:39.808636   47584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:37:39.899086   47584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:37:40.751095   47584 main.go:141] libmachine: Making call to close driver server
	I0923 11:37:40.751126   47584 main.go:141] libmachine: (test-preload-431525) Calling .Close
	I0923 11:37:40.751494   47584 main.go:141] libmachine: (test-preload-431525) DBG | Closing plugin on server side
	I0923 11:37:40.751495   47584 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:37:40.751520   47584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:37:40.751530   47584 main.go:141] libmachine: Making call to close driver server
	I0923 11:37:40.751541   47584 main.go:141] libmachine: (test-preload-431525) Calling .Close
	I0923 11:37:40.751798   47584 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:37:40.751830   47584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:37:40.751855   47584 main.go:141] libmachine: (test-preload-431525) DBG | Closing plugin on server side
	I0923 11:37:40.770954   47584 main.go:141] libmachine: Making call to close driver server
	I0923 11:37:40.770975   47584 main.go:141] libmachine: (test-preload-431525) Calling .Close
	I0923 11:37:40.771278   47584 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:37:40.771300   47584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:37:40.790562   47584 main.go:141] libmachine: Making call to close driver server
	I0923 11:37:40.790588   47584 main.go:141] libmachine: (test-preload-431525) Calling .Close
	I0923 11:37:40.790919   47584 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:37:40.790939   47584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:37:40.790944   47584 main.go:141] libmachine: (test-preload-431525) DBG | Closing plugin on server side
	I0923 11:37:40.790948   47584 main.go:141] libmachine: Making call to close driver server
	I0923 11:37:40.790993   47584 main.go:141] libmachine: (test-preload-431525) Calling .Close
	I0923 11:37:40.791256   47584 main.go:141] libmachine: (test-preload-431525) DBG | Closing plugin on server side
	I0923 11:37:40.791269   47584 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:37:40.791289   47584 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:37:40.793442   47584 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0923 11:37:40.794672   47584 addons.go:510] duration metric: took 1.253663467s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0923 11:37:41.741800   47584 node_ready.go:53] node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:44.241150   47584 node_ready.go:53] node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:46.740899   47584 node_ready.go:53] node "test-preload-431525" has status "Ready":"False"
	I0923 11:37:47.240652   47584 node_ready.go:49] node "test-preload-431525" has status "Ready":"True"
	I0923 11:37:47.240675   47584 node_ready.go:38] duration metric: took 7.503904703s for node "test-preload-431525" to be "Ready" ...
	I0923 11:37:47.240684   47584 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:47.246706   47584 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:47.252008   47584 pod_ready.go:93] pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:47.252026   47584 pod_ready.go:82] duration metric: took 5.292954ms for pod "coredns-6d4b75cb6d-9jwgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:47.252033   47584 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:49.259926   47584 pod_ready.go:103] pod "etcd-test-preload-431525" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:51.758351   47584 pod_ready.go:103] pod "etcd-test-preload-431525" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:52.258679   47584 pod_ready.go:93] pod "etcd-test-preload-431525" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:52.258712   47584 pod_ready.go:82] duration metric: took 5.00667124s for pod "etcd-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.258725   47584 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.264121   47584 pod_ready.go:93] pod "kube-apiserver-test-preload-431525" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:52.264143   47584 pod_ready.go:82] duration metric: took 5.410453ms for pod "kube-apiserver-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.264155   47584 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.268502   47584 pod_ready.go:93] pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:52.268523   47584 pod_ready.go:82] duration metric: took 4.360383ms for pod "kube-controller-manager-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.268534   47584 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j82pd" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.272835   47584 pod_ready.go:93] pod "kube-proxy-j82pd" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:52.272856   47584 pod_ready.go:82] duration metric: took 4.314359ms for pod "kube-proxy-j82pd" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.272865   47584 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.277054   47584 pod_ready.go:93] pod "kube-scheduler-test-preload-431525" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:52.277072   47584 pod_ready.go:82] duration metric: took 4.199822ms for pod "kube-scheduler-test-preload-431525" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:52.277080   47584 pod_ready.go:39] duration metric: took 5.036387912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:52.277092   47584 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:52.277146   47584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:52.292578   47584 api_server.go:72] duration metric: took 12.751594075s to wait for apiserver process to appear ...
	I0923 11:37:52.292605   47584 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:52.292627   47584 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0923 11:37:52.298011   47584 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0923 11:37:52.299122   47584 api_server.go:141] control plane version: v1.24.4
	I0923 11:37:52.299143   47584 api_server.go:131] duration metric: took 6.531505ms to wait for apiserver health ...
	I0923 11:37:52.299150   47584 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:52.459294   47584 system_pods.go:59] 7 kube-system pods found
	I0923 11:37:52.459320   47584 system_pods.go:61] "coredns-6d4b75cb6d-9jwgs" [f5e3ba78-532b-4290-a68f-389990db7612] Running
	I0923 11:37:52.459325   47584 system_pods.go:61] "etcd-test-preload-431525" [5d124bab-f432-43a4-9d8f-5b9818683af7] Running
	I0923 11:37:52.459335   47584 system_pods.go:61] "kube-apiserver-test-preload-431525" [e3a3ca6b-8000-4e62-ba9a-3c1e90e7142e] Running
	I0923 11:37:52.459340   47584 system_pods.go:61] "kube-controller-manager-test-preload-431525" [8303326c-d92f-4bf1-a9bf-3f08a53b1922] Running
	I0923 11:37:52.459344   47584 system_pods.go:61] "kube-proxy-j82pd" [22b3f621-1600-43b7-882c-9bfaf603d40a] Running
	I0923 11:37:52.459347   47584 system_pods.go:61] "kube-scheduler-test-preload-431525" [563e81f6-fe3b-4bd1-9c98-91dbf5538e49] Running
	I0923 11:37:52.459350   47584 system_pods.go:61] "storage-provisioner" [7e961cf1-dd3a-42b6-a7de-b16d4f5d4865] Running
	I0923 11:37:52.459355   47584 system_pods.go:74] duration metric: took 160.199986ms to wait for pod list to return data ...
	I0923 11:37:52.459362   47584 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:52.655384   47584 default_sa.go:45] found service account: "default"
	I0923 11:37:52.655411   47584 default_sa.go:55] duration metric: took 196.043213ms for default service account to be created ...
	I0923 11:37:52.655421   47584 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:52.858436   47584 system_pods.go:86] 7 kube-system pods found
	I0923 11:37:52.858469   47584 system_pods.go:89] "coredns-6d4b75cb6d-9jwgs" [f5e3ba78-532b-4290-a68f-389990db7612] Running
	I0923 11:37:52.858474   47584 system_pods.go:89] "etcd-test-preload-431525" [5d124bab-f432-43a4-9d8f-5b9818683af7] Running
	I0923 11:37:52.858478   47584 system_pods.go:89] "kube-apiserver-test-preload-431525" [e3a3ca6b-8000-4e62-ba9a-3c1e90e7142e] Running
	I0923 11:37:52.858487   47584 system_pods.go:89] "kube-controller-manager-test-preload-431525" [8303326c-d92f-4bf1-a9bf-3f08a53b1922] Running
	I0923 11:37:52.858490   47584 system_pods.go:89] "kube-proxy-j82pd" [22b3f621-1600-43b7-882c-9bfaf603d40a] Running
	I0923 11:37:52.858498   47584 system_pods.go:89] "kube-scheduler-test-preload-431525" [563e81f6-fe3b-4bd1-9c98-91dbf5538e49] Running
	I0923 11:37:52.858501   47584 system_pods.go:89] "storage-provisioner" [7e961cf1-dd3a-42b6-a7de-b16d4f5d4865] Running
	I0923 11:37:52.858507   47584 system_pods.go:126] duration metric: took 203.081086ms to wait for k8s-apps to be running ...
	I0923 11:37:52.858514   47584 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:52.858557   47584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:52.875328   47584 system_svc.go:56] duration metric: took 16.802845ms WaitForService to wait for kubelet
	I0923 11:37:52.875360   47584 kubeadm.go:582] duration metric: took 13.334380173s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:37:52.875382   47584 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:53.056719   47584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:37:53.056744   47584 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:53.056754   47584 node_conditions.go:105] duration metric: took 181.367164ms to run NodePressure ...
	I0923 11:37:53.056764   47584 start.go:241] waiting for startup goroutines ...
	I0923 11:37:53.056771   47584 start.go:246] waiting for cluster config update ...
	I0923 11:37:53.056780   47584 start.go:255] writing updated cluster config ...
	I0923 11:37:53.057036   47584 ssh_runner.go:195] Run: rm -f paused
	I0923 11:37:53.103143   47584 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0923 11:37:53.104788   47584 out.go:201] 
	W0923 11:37:53.105953   47584 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0923 11:37:53.106950   47584 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0923 11:37:53.108211   47584 out.go:177] * Done! kubectl is now configured to use "test-preload-431525" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 11:37:53 test-preload-431525 crio[659]: time="2024-09-23 11:37:53.980335885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091473980312242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ba478d9-1993-4a16-897a-bcc3867bb91a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:53 test-preload-431525 crio[659]: time="2024-09-23 11:37:53.980908354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da502187-8dd8-4315-8a18-c8b71f3e7065 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:53 test-preload-431525 crio[659]: time="2024-09-23 11:37:53.980958155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da502187-8dd8-4315-8a18-c8b71f3e7065 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:53 test-preload-431525 crio[659]: time="2024-09-23 11:37:53.981114869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e246afaf7c6fb8bac11d3db7fce96b681e92e08bc0af70fb9f316a3e5087b131,PodSandboxId:56392075df7717fae7b7045f4701ed059914fe10238b453dda6351523a945e1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727091464826368496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9jwgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e3ba78-532b-4290-a68f-389990db7612,},Annotations:map[string]string{io.kubernetes.container.hash: 7a01aee6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59bd71c4676fdd6fccf6edf3e13ea69a2f6cb58616ee009964301d9040d0d2ea,PodSandboxId:4db3b02f4cd5097ad5acc303bbb1be3d313b208fd3e47e4f4e738b58e7c044fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727091457436123480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7e961cf1-dd3a-42b6-a7de-b16d4f5d4865,},Annotations:map[string]string{io.kubernetes.container.hash: 7235038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:411dddc45e1c029c17c1ce3f81cc016fb5049b84298e2fd7ad920fbe1e707ff8,PodSandboxId:ff101e3f644b80b9f12845faf83a6a70a628bb39ee5df80d9dbc4a5efcfac39e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727091457126500175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j82pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b
3f621-1600-43b7-882c-9bfaf603d40a,},Annotations:map[string]string{io.kubernetes.container.hash: 18e1a3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43ca0482b5a14865b1604bdb6f609ef6b0155b4183e3c17a12f7475d953ebcb,PodSandboxId:59e328dc8c28f19dba1903361c12723d5bfcf7fe50568eff7b740131ca1ef890,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727091452473275431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c87895a0f7fd017d33d3d5b75c42509,},Annot
ations:map[string]string{io.kubernetes.container.hash: d9badf33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc90dbb1096a2c9ed26e923d6308e36cfb141953d6b280bac2c2c6c4a762d25,PodSandboxId:27f5b188bc6fc14e5f505420ab888c18dd1e28f9f2e366f1d61b92287131b3a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727091452404573809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771395842b09d9012ea119bed73bd09e,},Annotations:map[
string]string{io.kubernetes.container.hash: b3dabe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f3daa7a23af28aa2b1cc7c620ff5aacbcf15c1cdee91cc9dab424c65e18aab,PodSandboxId:e8f142b6abc078affd65f92e2d3e0fe0b103a90d9db86fbeaea193eb3529b86c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727091452384620649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b285fa53890c993b893c0fb602f522,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dafe5459e3b4321dda387f21923217b9f4489f84ef2d541fbf31cab1cb9772,PodSandboxId:8d196372e930f0b07f1ab06d854719e1b82d7edfec3153b33840d7a3d6281e9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727091452390767336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b0657880c348571f296b81a0b3f87b,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da502187-8dd8-4315-8a18-c8b71f3e7065 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.018294881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f3e3cd3-ef49-4cad-984b-f00f07658651 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.018366050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f3e3cd3-ef49-4cad-984b-f00f07658651 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.021011521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0ddb802-47b0-4d85-b7d4-b2bbf3b1535b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.022749214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091474022722453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0ddb802-47b0-4d85-b7d4-b2bbf3b1535b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.023981993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcc7fabc-4a7d-4460-b4fa-53dd2751ad0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.024033723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcc7fabc-4a7d-4460-b4fa-53dd2751ad0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.024232644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e246afaf7c6fb8bac11d3db7fce96b681e92e08bc0af70fb9f316a3e5087b131,PodSandboxId:56392075df7717fae7b7045f4701ed059914fe10238b453dda6351523a945e1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727091464826368496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9jwgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e3ba78-532b-4290-a68f-389990db7612,},Annotations:map[string]string{io.kubernetes.container.hash: 7a01aee6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59bd71c4676fdd6fccf6edf3e13ea69a2f6cb58616ee009964301d9040d0d2ea,PodSandboxId:4db3b02f4cd5097ad5acc303bbb1be3d313b208fd3e47e4f4e738b58e7c044fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727091457436123480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7e961cf1-dd3a-42b6-a7de-b16d4f5d4865,},Annotations:map[string]string{io.kubernetes.container.hash: 7235038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:411dddc45e1c029c17c1ce3f81cc016fb5049b84298e2fd7ad920fbe1e707ff8,PodSandboxId:ff101e3f644b80b9f12845faf83a6a70a628bb39ee5df80d9dbc4a5efcfac39e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727091457126500175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j82pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b
3f621-1600-43b7-882c-9bfaf603d40a,},Annotations:map[string]string{io.kubernetes.container.hash: 18e1a3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43ca0482b5a14865b1604bdb6f609ef6b0155b4183e3c17a12f7475d953ebcb,PodSandboxId:59e328dc8c28f19dba1903361c12723d5bfcf7fe50568eff7b740131ca1ef890,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727091452473275431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c87895a0f7fd017d33d3d5b75c42509,},Annot
ations:map[string]string{io.kubernetes.container.hash: d9badf33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc90dbb1096a2c9ed26e923d6308e36cfb141953d6b280bac2c2c6c4a762d25,PodSandboxId:27f5b188bc6fc14e5f505420ab888c18dd1e28f9f2e366f1d61b92287131b3a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727091452404573809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771395842b09d9012ea119bed73bd09e,},Annotations:map[
string]string{io.kubernetes.container.hash: b3dabe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f3daa7a23af28aa2b1cc7c620ff5aacbcf15c1cdee91cc9dab424c65e18aab,PodSandboxId:e8f142b6abc078affd65f92e2d3e0fe0b103a90d9db86fbeaea193eb3529b86c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727091452384620649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b285fa53890c993b893c0fb602f522,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dafe5459e3b4321dda387f21923217b9f4489f84ef2d541fbf31cab1cb9772,PodSandboxId:8d196372e930f0b07f1ab06d854719e1b82d7edfec3153b33840d7a3d6281e9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727091452390767336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b0657880c348571f296b81a0b3f87b,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcc7fabc-4a7d-4460-b4fa-53dd2751ad0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.068702246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a024088-359c-429f-bef4-6b36a1fd495c name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.068775236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a024088-359c-429f-bef4-6b36a1fd495c name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.070278539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55722625-7cf2-429a-989e-4b14c9667f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.070748795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091474070714911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55722625-7cf2-429a-989e-4b14c9667f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.071411866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdd7f208-dfc4-448d-baaa-daf55c8e6657 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.071489664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdd7f208-dfc4-448d-baaa-daf55c8e6657 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.071685876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e246afaf7c6fb8bac11d3db7fce96b681e92e08bc0af70fb9f316a3e5087b131,PodSandboxId:56392075df7717fae7b7045f4701ed059914fe10238b453dda6351523a945e1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727091464826368496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9jwgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e3ba78-532b-4290-a68f-389990db7612,},Annotations:map[string]string{io.kubernetes.container.hash: 7a01aee6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59bd71c4676fdd6fccf6edf3e13ea69a2f6cb58616ee009964301d9040d0d2ea,PodSandboxId:4db3b02f4cd5097ad5acc303bbb1be3d313b208fd3e47e4f4e738b58e7c044fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727091457436123480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7e961cf1-dd3a-42b6-a7de-b16d4f5d4865,},Annotations:map[string]string{io.kubernetes.container.hash: 7235038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:411dddc45e1c029c17c1ce3f81cc016fb5049b84298e2fd7ad920fbe1e707ff8,PodSandboxId:ff101e3f644b80b9f12845faf83a6a70a628bb39ee5df80d9dbc4a5efcfac39e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727091457126500175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j82pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b
3f621-1600-43b7-882c-9bfaf603d40a,},Annotations:map[string]string{io.kubernetes.container.hash: 18e1a3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43ca0482b5a14865b1604bdb6f609ef6b0155b4183e3c17a12f7475d953ebcb,PodSandboxId:59e328dc8c28f19dba1903361c12723d5bfcf7fe50568eff7b740131ca1ef890,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727091452473275431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c87895a0f7fd017d33d3d5b75c42509,},Annot
ations:map[string]string{io.kubernetes.container.hash: d9badf33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc90dbb1096a2c9ed26e923d6308e36cfb141953d6b280bac2c2c6c4a762d25,PodSandboxId:27f5b188bc6fc14e5f505420ab888c18dd1e28f9f2e366f1d61b92287131b3a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727091452404573809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771395842b09d9012ea119bed73bd09e,},Annotations:map[
string]string{io.kubernetes.container.hash: b3dabe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f3daa7a23af28aa2b1cc7c620ff5aacbcf15c1cdee91cc9dab424c65e18aab,PodSandboxId:e8f142b6abc078affd65f92e2d3e0fe0b103a90d9db86fbeaea193eb3529b86c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727091452384620649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b285fa53890c993b893c0fb602f522,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dafe5459e3b4321dda387f21923217b9f4489f84ef2d541fbf31cab1cb9772,PodSandboxId:8d196372e930f0b07f1ab06d854719e1b82d7edfec3153b33840d7a3d6281e9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727091452390767336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b0657880c348571f296b81a0b3f87b,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdd7f208-dfc4-448d-baaa-daf55c8e6657 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.108707354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a7e2541-1f84-44a2-aafa-2cb4a88e6ede name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.108942531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a7e2541-1f84-44a2-aafa-2cb4a88e6ede name=/runtime.v1.RuntimeService/Version
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.110381928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8c68d39-69bc-4fb9-b4c9-afa3d7f3fea6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.110826298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727091474110802952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8c68d39-69bc-4fb9-b4c9-afa3d7f3fea6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.111344625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2df6d89-0288-448d-92b7-e208fe2c0e58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.111417642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2df6d89-0288-448d-92b7-e208fe2c0e58 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:37:54 test-preload-431525 crio[659]: time="2024-09-23 11:37:54.111575923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e246afaf7c6fb8bac11d3db7fce96b681e92e08bc0af70fb9f316a3e5087b131,PodSandboxId:56392075df7717fae7b7045f4701ed059914fe10238b453dda6351523a945e1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727091464826368496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9jwgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e3ba78-532b-4290-a68f-389990db7612,},Annotations:map[string]string{io.kubernetes.container.hash: 7a01aee6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59bd71c4676fdd6fccf6edf3e13ea69a2f6cb58616ee009964301d9040d0d2ea,PodSandboxId:4db3b02f4cd5097ad5acc303bbb1be3d313b208fd3e47e4f4e738b58e7c044fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727091457436123480,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7e961cf1-dd3a-42b6-a7de-b16d4f5d4865,},Annotations:map[string]string{io.kubernetes.container.hash: 7235038,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:411dddc45e1c029c17c1ce3f81cc016fb5049b84298e2fd7ad920fbe1e707ff8,PodSandboxId:ff101e3f644b80b9f12845faf83a6a70a628bb39ee5df80d9dbc4a5efcfac39e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727091457126500175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j82pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b
3f621-1600-43b7-882c-9bfaf603d40a,},Annotations:map[string]string{io.kubernetes.container.hash: 18e1a3f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43ca0482b5a14865b1604bdb6f609ef6b0155b4183e3c17a12f7475d953ebcb,PodSandboxId:59e328dc8c28f19dba1903361c12723d5bfcf7fe50568eff7b740131ca1ef890,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727091452473275431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c87895a0f7fd017d33d3d5b75c42509,},Annot
ations:map[string]string{io.kubernetes.container.hash: d9badf33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc90dbb1096a2c9ed26e923d6308e36cfb141953d6b280bac2c2c6c4a762d25,PodSandboxId:27f5b188bc6fc14e5f505420ab888c18dd1e28f9f2e366f1d61b92287131b3a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727091452404573809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 771395842b09d9012ea119bed73bd09e,},Annotations:map[
string]string{io.kubernetes.container.hash: b3dabe53,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f3daa7a23af28aa2b1cc7c620ff5aacbcf15c1cdee91cc9dab424c65e18aab,PodSandboxId:e8f142b6abc078affd65f92e2d3e0fe0b103a90d9db86fbeaea193eb3529b86c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727091452384620649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b285fa53890c993b893c0fb602f522,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dafe5459e3b4321dda387f21923217b9f4489f84ef2d541fbf31cab1cb9772,PodSandboxId:8d196372e930f0b07f1ab06d854719e1b82d7edfec3153b33840d7a3d6281e9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727091452390767336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-431525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94b0657880c348571f296b81a0b3f87b,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2df6d89-0288-448d-92b7-e208fe2c0e58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e246afaf7c6fb       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   56392075df771       coredns-6d4b75cb6d-9jwgs
	59bd71c4676fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   4db3b02f4cd50       storage-provisioner
	411dddc45e1c0       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   17 seconds ago      Running             kube-proxy                1                   ff101e3f644b8       kube-proxy-j82pd
	f43ca0482b5a1       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   59e328dc8c28f       etcd-test-preload-431525
	5bc90dbb1096a       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   27f5b188bc6fc       kube-apiserver-test-preload-431525
	63dafe5459e3b       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   8d196372e930f       kube-controller-manager-test-preload-431525
	98f3daa7a23af       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   e8f142b6abc07       kube-scheduler-test-preload-431525
	
	
	==> coredns [e246afaf7c6fb8bac11d3db7fce96b681e92e08bc0af70fb9f316a3e5087b131] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:56797 - 43775 "HINFO IN 6958128697177729734.3211665586128518054. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015782947s
	
	
	==> describe nodes <==
	Name:               test-preload-431525
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-431525
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=test-preload-431525
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-431525
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:37:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:37:47 +0000   Mon, 23 Sep 2024 11:36:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:37:47 +0000   Mon, 23 Sep 2024 11:36:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:37:47 +0000   Mon, 23 Sep 2024 11:36:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:37:47 +0000   Mon, 23 Sep 2024 11:37:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    test-preload-431525
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b57676c97ae849e89d56523be44bac2f
	  System UUID:                b57676c9-7ae8-49e8-9d56-523be44bac2f
	  Boot ID:                    cb75d6b5-1141-4b94-a9e4-0a90cf939a0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9jwgs                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-431525                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-431525             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-431525    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-j82pd                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-431525             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  108s (x4 over 108s)  kubelet          Node test-preload-431525 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x4 over 108s)  kubelet          Node test-preload-431525 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x4 over 108s)  kubelet          Node test-preload-431525 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node test-preload-431525 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node test-preload-431525 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node test-preload-431525 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                90s                  kubelet          Node test-preload-431525 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node test-preload-431525 event: Registered Node test-preload-431525 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-431525 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-431525 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-431525 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node test-preload-431525 event: Registered Node test-preload-431525 in Controller
	
	
	==> dmesg <==
	[Sep23 11:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050561] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040933] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep23 11:37] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.594288] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.580184] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.760538] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.061227] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055869] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.178039] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.133621] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.275545] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +13.156783] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.061982] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.812616] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +5.627383] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.521641] systemd-fstab-generator[1739]: Ignoring "noauto" option for root device
	[  +5.000766] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [f43ca0482b5a14865b1604bdb6f609ef6b0155b4183e3c17a12f7475d953ebcb] <==
	{"level":"info","ts":"2024-09-23T11:37:33.023Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"731f5c40d4af6217","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-23T11:37:33.026Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T11:37:33.029Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"731f5c40d4af6217","initial-advertise-peer-urls":["https://192.168.39.54:2380"],"listen-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:37:33.029Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:37:33.029Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-23T11:37:33.032Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-23T11:37:33.032Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-23T11:37:33.033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 switched to configuration voters=(8295450472155669015)"}
	{"level":"info","ts":"2024-09-23T11:37:33.037Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","added-peer-id":"731f5c40d4af6217","added-peer-peer-urls":["https://192.168.39.54:2380"]}
	{"level":"info","ts":"2024-09-23T11:37:33.037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:37:33.037Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgPreVoteResp from 731f5c40d4af6217 at term 2"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgVoteResp from 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:37:34.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 731f5c40d4af6217 elected leader 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-09-23T11:37:34.180Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"731f5c40d4af6217","local-member-attributes":"{Name:test-preload-431525 ClientURLs:[https://192.168.39.54:2379]}","request-path":"/0/members/731f5c40d4af6217/attributes","cluster-id":"ad335f297da439ca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:37:34.180Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:37:34.181Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:37:34.183Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:37:34.185Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.54:2379"}
	{"level":"info","ts":"2024-09-23T11:37:34.185Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:37:34.185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:37:54 up 0 min,  0 users,  load average: 0.75, 0.20, 0.07
	Linux test-preload-431525 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5bc90dbb1096a2c9ed26e923d6308e36cfb141953d6b280bac2c2c6c4a762d25] <==
	I0923 11:37:36.621270       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0923 11:37:36.621306       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0923 11:37:36.621324       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0923 11:37:36.621598       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:37:36.636674       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:37:36.676135       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0923 11:37:36.676308       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0923 11:37:36.730550       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:37:36.777213       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0923 11:37:36.781629       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:37:36.783559       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0923 11:37:36.807981       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0923 11:37:36.808790       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0923 11:37:36.809066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:37:36.832841       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0923 11:37:37.296625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0923 11:37:37.372665       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0923 11:37:37.601278       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:37:38.420953       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0923 11:37:38.442228       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0923 11:37:38.483044       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0923 11:37:38.496787       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:37:38.502305       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:37:49.092079       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 11:37:49.092469       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [63dafe5459e3b4321dda387f21923217b9f4489f84ef2d541fbf31cab1cb9772] <==
	I0923 11:37:49.069354       1 shared_informer.go:262] Caches are synced for PV protection
	I0923 11:37:49.072634       1 shared_informer.go:262] Caches are synced for daemon sets
	I0923 11:37:49.072726       1 shared_informer.go:262] Caches are synced for taint
	I0923 11:37:49.072853       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0923 11:37:49.073015       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-431525. Assuming now as a timestamp.
	I0923 11:37:49.073083       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0923 11:37:49.073687       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0923 11:37:49.074095       1 event.go:294] "Event occurred" object="test-preload-431525" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-431525 event: Registered Node test-preload-431525 in Controller"
	I0923 11:37:49.076274       1 shared_informer.go:262] Caches are synced for job
	I0923 11:37:49.078239       1 shared_informer.go:262] Caches are synced for endpoint
	I0923 11:37:49.078847       1 shared_informer.go:262] Caches are synced for persistent volume
	I0923 11:37:49.081323       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0923 11:37:49.083322       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0923 11:37:49.084229       1 shared_informer.go:262] Caches are synced for PVC protection
	I0923 11:37:49.087228       1 shared_informer.go:262] Caches are synced for GC
	I0923 11:37:49.087237       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0923 11:37:49.093533       1 shared_informer.go:262] Caches are synced for disruption
	I0923 11:37:49.093581       1 disruption.go:371] Sending events to api server.
	I0923 11:37:49.168188       1 shared_informer.go:262] Caches are synced for attach detach
	I0923 11:37:49.181795       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0923 11:37:49.267506       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 11:37:49.312726       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 11:37:49.711354       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 11:37:49.727696       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 11:37:49.727734       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [411dddc45e1c029c17c1ce3f81cc016fb5049b84298e2fd7ad920fbe1e707ff8] <==
	I0923 11:37:37.313747       1 node.go:163] Successfully retrieved node IP: 192.168.39.54
	I0923 11:37:37.313892       1 server_others.go:138] "Detected node IP" address="192.168.39.54"
	I0923 11:37:37.313938       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0923 11:37:37.362483       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0923 11:37:37.362500       1 server_others.go:206] "Using iptables Proxier"
	I0923 11:37:37.363118       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0923 11:37:37.364240       1 server.go:661] "Version info" version="v1.24.4"
	I0923 11:37:37.364373       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:37:37.365991       1 config.go:317] "Starting service config controller"
	I0923 11:37:37.366074       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0923 11:37:37.366135       1 config.go:226] "Starting endpoint slice config controller"
	I0923 11:37:37.366207       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0923 11:37:37.367080       1 config.go:444] "Starting node config controller"
	I0923 11:37:37.367135       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0923 11:37:37.476366       1 shared_informer.go:262] Caches are synced for node config
	I0923 11:37:37.476480       1 shared_informer.go:262] Caches are synced for service config
	I0923 11:37:37.476575       1 shared_informer.go:262] Caches are synced for endpoint slice config
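Editor's note: the "Unknown proxy mode, assuming iptables proxy" line only means proxyMode was left empty, so kube-proxy fell back to iptables; nothing in this run requires a change. If an explicit mode were wanted, the usual place in a kubeadm-style cluster is the kube-proxy ConfigMap in kube-system (key config.conf); a minimal sketch of the relevant fragment, shown purely as an illustration:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	mode: "iptables"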
	
	
	==> kube-scheduler [98f3daa7a23af28aa2b1cc7c620ff5aacbcf15c1cdee91cc9dab424c65e18aab] <==
	I0923 11:37:33.604567       1 serving.go:348] Generated self-signed cert in-memory
	W0923 11:37:36.654264       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:37:36.654411       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:37:36.654519       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:37:36.654545       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:37:36.698696       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0923 11:37:36.698776       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:37:36.712848       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 11:37:36.713033       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:37:36.716520       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0923 11:37:36.716769       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0923 11:37:36.814126       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
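Editor's note: the three authentication warnings above are the familiar start-up race: the scheduler queries the extension-apiserver-authentication ConfigMap before its RBAC grant is visible and proceeds anonymously, and the client-ca informer syncs a moment later (last line above), so no action is needed here. For completeness, the placeholder command the warning suggests would look roughly like this once filled in; the binding name is hypothetical, and --user is used because the scheduler authenticates as the system:kube-scheduler user rather than a service account:

	kubectl --context test-preload-431525 -n kube-system create rolebinding \
	  extension-apiserver-authentication-reader-scheduler \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler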
	
	
	==> kubelet <==
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.732665    1120 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: E0923 11:37:36.738370    1120 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9jwgs" podUID=f5e3ba78-532b-4290-a68f-389990db7612
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.794575    1120 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-431525"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.794764    1120 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-431525"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.801489    1120 setters.go:532] "Node became not ready" node="test-preload-431525" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-23 11:37:36.801420233 +0000 UTC m=+5.260819447 LastTransitionTime:2024-09-23 11:37:36.801420233 +0000 UTC m=+5.260819447 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814012    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b3f621-1600-43b7-882c-9bfaf603d40a-xtables-lock\") pod \"kube-proxy-j82pd\" (UID: \"22b3f621-1600-43b7-882c-9bfaf603d40a\") " pod="kube-system/kube-proxy-j82pd"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814109    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npx6c\" (UniqueName: \"kubernetes.io/projected/7e961cf1-dd3a-42b6-a7de-b16d4f5d4865-kube-api-access-npx6c\") pod \"storage-provisioner\" (UID: \"7e961cf1-dd3a-42b6-a7de-b16d4f5d4865\") " pod="kube-system/storage-provisioner"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814232    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22b3f621-1600-43b7-882c-9bfaf603d40a-kube-proxy\") pod \"kube-proxy-j82pd\" (UID: \"22b3f621-1600-43b7-882c-9bfaf603d40a\") " pod="kube-system/kube-proxy-j82pd"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814287    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b3f621-1600-43b7-882c-9bfaf603d40a-lib-modules\") pod \"kube-proxy-j82pd\" (UID: \"22b3f621-1600-43b7-882c-9bfaf603d40a\") " pod="kube-system/kube-proxy-j82pd"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814333    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fch6n\" (UniqueName: \"kubernetes.io/projected/22b3f621-1600-43b7-882c-9bfaf603d40a-kube-api-access-fch6n\") pod \"kube-proxy-j82pd\" (UID: \"22b3f621-1600-43b7-882c-9bfaf603d40a\") " pod="kube-system/kube-proxy-j82pd"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814380    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e961cf1-dd3a-42b6-a7de-b16d4f5d4865-tmp\") pod \"storage-provisioner\" (UID: \"7e961cf1-dd3a-42b6-a7de-b16d4f5d4865\") " pod="kube-system/storage-provisioner"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814430    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume\") pod \"coredns-6d4b75cb6d-9jwgs\" (UID: \"f5e3ba78-532b-4290-a68f-389990db7612\") " pod="kube-system/coredns-6d4b75cb6d-9jwgs"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814480    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lkw\" (UniqueName: \"kubernetes.io/projected/f5e3ba78-532b-4290-a68f-389990db7612-kube-api-access-98lkw\") pod \"coredns-6d4b75cb6d-9jwgs\" (UID: \"f5e3ba78-532b-4290-a68f-389990db7612\") " pod="kube-system/coredns-6d4b75cb6d-9jwgs"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: I0923 11:37:36.814530    1120 reconciler.go:159] "Reconciler: start to sync state"
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: E0923 11:37:36.919886    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 11:37:36 test-preload-431525 kubelet[1120]: E0923 11:37:36.919982    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume podName:f5e3ba78-532b-4290-a68f-389990db7612 nodeName:}" failed. No retries permitted until 2024-09-23 11:37:37.419947474 +0000 UTC m=+5.879346702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume") pod "coredns-6d4b75cb6d-9jwgs" (UID: "f5e3ba78-532b-4290-a68f-389990db7612") : object "kube-system"/"coredns" not registered
	Sep 23 11:37:37 test-preload-431525 kubelet[1120]: E0923 11:37:37.423656    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 11:37:37 test-preload-431525 kubelet[1120]: E0923 11:37:37.423749    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume podName:f5e3ba78-532b-4290-a68f-389990db7612 nodeName:}" failed. No retries permitted until 2024-09-23 11:37:38.423734412 +0000 UTC m=+6.883133627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume") pod "coredns-6d4b75cb6d-9jwgs" (UID: "f5e3ba78-532b-4290-a68f-389990db7612") : object "kube-system"/"coredns" not registered
	Sep 23 11:37:37 test-preload-431525 kubelet[1120]: I0923 11:37:37.783727    1120 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=097d1de2-eebc-43f4-a83e-f95d176c6697 path="/var/lib/kubelet/pods/097d1de2-eebc-43f4-a83e-f95d176c6697/volumes"
	Sep 23 11:37:38 test-preload-431525 kubelet[1120]: E0923 11:37:38.429663    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 11:37:38 test-preload-431525 kubelet[1120]: E0923 11:37:38.429752    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume podName:f5e3ba78-532b-4290-a68f-389990db7612 nodeName:}" failed. No retries permitted until 2024-09-23 11:37:40.429734592 +0000 UTC m=+8.889133807 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume") pod "coredns-6d4b75cb6d-9jwgs" (UID: "f5e3ba78-532b-4290-a68f-389990db7612") : object "kube-system"/"coredns" not registered
	Sep 23 11:37:38 test-preload-431525 kubelet[1120]: E0923 11:37:38.774252    1120 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9jwgs" podUID=f5e3ba78-532b-4290-a68f-389990db7612
	Sep 23 11:37:40 test-preload-431525 kubelet[1120]: E0923 11:37:40.449480    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 11:37:40 test-preload-431525 kubelet[1120]: E0923 11:37:40.449548    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume podName:f5e3ba78-532b-4290-a68f-389990db7612 nodeName:}" failed. No retries permitted until 2024-09-23 11:37:44.449534314 +0000 UTC m=+12.908933539 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f5e3ba78-532b-4290-a68f-389990db7612-config-volume") pod "coredns-6d4b75cb6d-9jwgs" (UID: "f5e3ba78-532b-4290-a68f-389990db7612") : object "kube-system"/"coredns" not registered
	Sep 23 11:37:40 test-preload-431525 kubelet[1120]: E0923 11:37:40.774245    1120 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9jwgs" podUID=f5e3ba78-532b-4290-a68f-389990db7612
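Editor's note: the repeated "No CNI configuration file in /etc/cni/net.d/" and "object \"kube-system\"/\"coredns\" not registered" errors are transient: they occur while the node is still NotReady and the exponential back-off (500ms, 1s, 2s, 4s) shows the kubelet retrying as designed. For orientation only, the kind of bridge conflist that eventually lands in /etc/cni/net.d/ and clears the NetworkPluginNotReady condition looks roughly like the sketch below; the file name and pod subnet are assumptions, not values taken from this report:

	/etc/cni/net.d/1-k8s.conflist (hypothetical name)
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}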
	
	
	==> storage-provisioner [59bd71c4676fdd6fccf6edf3e13ea69a2f6cb58616ee009964301d9040d0d2ea] <==
	I0923 11:37:37.523440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-431525 -n test-preload-431525
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-431525 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-431525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-431525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-431525: (1.156558814s)
--- FAIL: TestPreload (173.31s)
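Editor's note: to iterate on just this failure locally, the integration test can be re-run in isolation with standard Go tooling; this is a generic sketch, assuming a built out/minikube-linux-amd64 on PATH via the test harness, not the exact command this pipeline uses, and the suite's own flags (driver, container runtime, start args) still need to be supplied as described in the repo's contributing docs:

	go test -v -timeout 60m -run TestPreload ./test/integration/...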

                                                
                                    
TestKubernetesUpgrade (427.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.363276329s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-193704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-193704" primary control-plane node in "kubernetes-upgrade-193704" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:39:51.370225   49135 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:39:51.370504   49135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:39:51.370514   49135 out.go:358] Setting ErrFile to fd 2...
	I0923 11:39:51.370520   49135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:39:51.370725   49135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:39:51.371239   49135 out.go:352] Setting JSON to false
	I0923 11:39:51.372150   49135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4934,"bootTime":1727086657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:39:51.372227   49135 start.go:139] virtualization: kvm guest
	I0923 11:39:51.374397   49135 out.go:177] * [kubernetes-upgrade-193704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:39:51.375974   49135 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:39:51.376009   49135 notify.go:220] Checking for updates...
	I0923 11:39:51.378312   49135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:39:51.380951   49135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:39:51.383719   49135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:39:51.384992   49135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:39:51.386311   49135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:39:51.387760   49135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:39:51.423079   49135 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 11:39:51.424738   49135 start.go:297] selected driver: kvm2
	I0923 11:39:51.424752   49135 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:39:51.424765   49135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:39:51.425630   49135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:39:51.425758   49135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:39:51.441716   49135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:39:51.441801   49135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:39:51.442153   49135 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:39:51.442188   49135 cni.go:84] Creating CNI manager for ""
	I0923 11:39:51.442259   49135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:39:51.442271   49135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:39:51.442341   49135 start.go:340] cluster config:
	{Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:39:51.442479   49135 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:39:51.444345   49135 out.go:177] * Starting "kubernetes-upgrade-193704" primary control-plane node in "kubernetes-upgrade-193704" cluster
	I0923 11:39:51.445525   49135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 11:39:51.445564   49135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 11:39:51.445582   49135 cache.go:56] Caching tarball of preloaded images
	I0923 11:39:51.445660   49135 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:39:51.445671   49135 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0923 11:39:51.445951   49135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/config.json ...
	I0923 11:39:51.445970   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/config.json: {Name:mk2e4fc18fd4d42603b7c3b2db763423196e49ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:39:51.446099   49135 start.go:360] acquireMachinesLock for kubernetes-upgrade-193704: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:39:51.446128   49135 start.go:364] duration metric: took 15.208µs to acquireMachinesLock for "kubernetes-upgrade-193704"
	I0923 11:39:51.446144   49135 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 11:39:51.446197   49135 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 11:39:51.447904   49135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 11:39:51.448041   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:39:51.448084   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:39:51.463157   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39963
	I0923 11:39:51.463542   49135 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:39:51.464139   49135 main.go:141] libmachine: Using API Version  1
	I0923 11:39:51.464158   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:39:51.464535   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:39:51.464770   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:39:51.464904   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:39:51.465039   49135 start.go:159] libmachine.API.Create for "kubernetes-upgrade-193704" (driver="kvm2")
	I0923 11:39:51.465069   49135 client.go:168] LocalClient.Create starting
	I0923 11:39:51.465108   49135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 11:39:51.465139   49135 main.go:141] libmachine: Decoding PEM data...
	I0923 11:39:51.465176   49135 main.go:141] libmachine: Parsing certificate...
	I0923 11:39:51.465243   49135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 11:39:51.465281   49135 main.go:141] libmachine: Decoding PEM data...
	I0923 11:39:51.465295   49135 main.go:141] libmachine: Parsing certificate...
	I0923 11:39:51.465318   49135 main.go:141] libmachine: Running pre-create checks...
	I0923 11:39:51.465326   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .PreCreateCheck
	I0923 11:39:51.465746   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetConfigRaw
	I0923 11:39:51.466174   49135 main.go:141] libmachine: Creating machine...
	I0923 11:39:51.466191   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .Create
	I0923 11:39:51.466308   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Creating KVM machine...
	I0923 11:39:51.467545   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found existing default KVM network
	I0923 11:39:51.468195   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:51.468053   49200 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I0923 11:39:51.468227   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | created network xml: 
	I0923 11:39:51.468238   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | <network>
	I0923 11:39:51.468245   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   <name>mk-kubernetes-upgrade-193704</name>
	I0923 11:39:51.468256   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   <dns enable='no'/>
	I0923 11:39:51.468279   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   
	I0923 11:39:51.468297   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 11:39:51.468309   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |     <dhcp>
	I0923 11:39:51.468320   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 11:39:51.468327   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |     </dhcp>
	I0923 11:39:51.468332   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   </ip>
	I0923 11:39:51.468339   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG |   
	I0923 11:39:51.468348   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | </network>
	I0923 11:39:51.468359   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | 
	I0923 11:39:51.473530   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | trying to create private KVM network mk-kubernetes-upgrade-193704 192.168.39.0/24...
	I0923 11:39:51.550472   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704 ...
	I0923 11:39:51.550507   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 11:39:51.550526   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | private KVM network mk-kubernetes-upgrade-193704 192.168.39.0/24 created
	I0923 11:39:51.550545   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:51.549832   49200 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:39:51.550569   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 11:39:51.813945   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:51.813848   49200 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa...
	I0923 11:39:51.919164   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:51.919045   49200 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/kubernetes-upgrade-193704.rawdisk...
	I0923 11:39:51.919192   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Writing magic tar header
	I0923 11:39:51.919215   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Writing SSH key tar header
	I0923 11:39:51.919233   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:51.919183   49200 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704 ...
	I0923 11:39:51.919347   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704
	I0923 11:39:51.919376   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 11:39:51.919390   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704 (perms=drwx------)
	I0923 11:39:51.919402   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:39:51.919425   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 11:39:51.919436   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 11:39:51.919449   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home/jenkins
	I0923 11:39:51.919459   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Checking permissions on dir: /home
	I0923 11:39:51.919471   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 11:39:51.919485   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 11:39:51.919495   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Skipping /home - not owner
	I0923 11:39:51.919510   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 11:39:51.919521   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 11:39:51.919535   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 11:39:51.919548   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Creating domain...
	I0923 11:39:51.920507   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) define libvirt domain using xml: 
	I0923 11:39:51.920531   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) <domain type='kvm'>
	I0923 11:39:51.920544   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <name>kubernetes-upgrade-193704</name>
	I0923 11:39:51.920567   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <memory unit='MiB'>2200</memory>
	I0923 11:39:51.920595   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <vcpu>2</vcpu>
	I0923 11:39:51.920617   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <features>
	I0923 11:39:51.920635   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <acpi/>
	I0923 11:39:51.920645   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <apic/>
	I0923 11:39:51.920653   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <pae/>
	I0923 11:39:51.920664   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     
	I0923 11:39:51.920676   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   </features>
	I0923 11:39:51.920687   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <cpu mode='host-passthrough'>
	I0923 11:39:51.920704   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   
	I0923 11:39:51.920716   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   </cpu>
	I0923 11:39:51.920725   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <os>
	I0923 11:39:51.920736   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <type>hvm</type>
	I0923 11:39:51.920746   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <boot dev='cdrom'/>
	I0923 11:39:51.920755   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <boot dev='hd'/>
	I0923 11:39:51.920765   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <bootmenu enable='no'/>
	I0923 11:39:51.920776   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   </os>
	I0923 11:39:51.920789   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   <devices>
	I0923 11:39:51.920808   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <disk type='file' device='cdrom'>
	I0923 11:39:51.920834   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/boot2docker.iso'/>
	I0923 11:39:51.920853   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <target dev='hdc' bus='scsi'/>
	I0923 11:39:51.920861   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <readonly/>
	I0923 11:39:51.920874   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </disk>
	I0923 11:39:51.920886   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <disk type='file' device='disk'>
	I0923 11:39:51.920914   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 11:39:51.920943   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/kubernetes-upgrade-193704.rawdisk'/>
	I0923 11:39:51.920959   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <target dev='hda' bus='virtio'/>
	I0923 11:39:51.920970   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </disk>
	I0923 11:39:51.920984   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <interface type='network'>
	I0923 11:39:51.920996   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <source network='mk-kubernetes-upgrade-193704'/>
	I0923 11:39:51.921016   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <model type='virtio'/>
	I0923 11:39:51.921034   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </interface>
	I0923 11:39:51.921045   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <interface type='network'>
	I0923 11:39:51.921053   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <source network='default'/>
	I0923 11:39:51.921064   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <model type='virtio'/>
	I0923 11:39:51.921074   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </interface>
	I0923 11:39:51.921082   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <serial type='pty'>
	I0923 11:39:51.921092   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <target port='0'/>
	I0923 11:39:51.921108   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </serial>
	I0923 11:39:51.921122   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <console type='pty'>
	I0923 11:39:51.921135   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <target type='serial' port='0'/>
	I0923 11:39:51.921143   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </console>
	I0923 11:39:51.921154   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     <rng model='virtio'>
	I0923 11:39:51.921165   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)       <backend model='random'>/dev/random</backend>
	I0923 11:39:51.921175   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     </rng>
	I0923 11:39:51.921184   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     
	I0923 11:39:51.921196   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)     
	I0923 11:39:51.921209   49135 main.go:141] libmachine: (kubernetes-upgrade-193704)   </devices>
	I0923 11:39:51.921225   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) </domain>
	I0923 11:39:51.921237   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) 
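Editor's note: the network and domain XML dumped above can be inspected directly on the host with standard libvirt tooling while the VM is coming up; these virsh commands are generic libvirt commands using the qemu:///system URI from the cluster config, not something this test run invokes:

	virsh -c qemu:///system net-dumpxml mk-kubernetes-upgrade-193704
	virsh -c qemu:///system dumpxml kubernetes-upgrade-193704
	virsh -c qemu:///system net-dhcp-leases mk-kubernetes-upgrade-193704   # handy during the "Waiting to get IP" retries below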
	I0923 11:39:51.924993   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:03:b2:5c in network default
	I0923 11:39:51.925583   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Ensuring networks are active...
	I0923 11:39:51.925609   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:51.926150   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Ensuring network default is active
	I0923 11:39:51.926340   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Ensuring network mk-kubernetes-upgrade-193704 is active
	I0923 11:39:51.926767   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Getting domain xml...
	I0923 11:39:51.927463   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Creating domain...
	I0923 11:39:53.343860   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Waiting to get IP...
	I0923 11:39:53.344833   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.345226   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.345275   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:53.345203   49200 retry.go:31] will retry after 226.943915ms: waiting for machine to come up
	I0923 11:39:53.573728   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.574167   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.574194   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:53.574122   49200 retry.go:31] will retry after 235.032305ms: waiting for machine to come up
	I0923 11:39:53.810424   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.810811   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:53.810840   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:53.810789   49200 retry.go:31] will retry after 425.017473ms: waiting for machine to come up
	I0923 11:39:54.238029   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:54.238542   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:54.238573   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:54.238505   49200 retry.go:31] will retry after 557.94125ms: waiting for machine to come up
	I0923 11:39:54.798279   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:54.798700   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:54.798723   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:54.798674   49200 retry.go:31] will retry after 755.671699ms: waiting for machine to come up
	I0923 11:39:55.555715   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:55.556154   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:55.556180   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:55.556104   49200 retry.go:31] will retry after 886.341128ms: waiting for machine to come up
	I0923 11:39:56.444258   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:56.444742   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:56.444772   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:56.444712   49200 retry.go:31] will retry after 1.180909448s: waiting for machine to come up
	I0923 11:39:57.627330   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:57.627679   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:57.627704   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:57.627613   49200 retry.go:31] will retry after 1.016027941s: waiting for machine to come up
	I0923 11:39:58.645000   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:39:58.645438   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:39:58.645464   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:39:58.645396   49200 retry.go:31] will retry after 1.50919397s: waiting for machine to come up
	I0923 11:40:00.156771   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:00.157229   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:40:00.157257   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:40:00.157184   49200 retry.go:31] will retry after 1.411270113s: waiting for machine to come up
	I0923 11:40:01.569638   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:01.570021   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:40:01.570047   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:40:01.569985   49200 retry.go:31] will retry after 2.156952278s: waiting for machine to come up
	I0923 11:40:03.729453   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:03.729854   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:40:03.729886   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:40:03.729816   49200 retry.go:31] will retry after 3.402821135s: waiting for machine to come up
	I0923 11:40:07.134594   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:07.134886   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:40:07.134913   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:40:07.134821   49200 retry.go:31] will retry after 2.769160559s: waiting for machine to come up
	I0923 11:40:09.906281   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:09.906644   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find current IP address of domain kubernetes-upgrade-193704 in network mk-kubernetes-upgrade-193704
	I0923 11:40:09.906665   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | I0923 11:40:09.906604   49200 retry.go:31] will retry after 5.249045651s: waiting for machine to come up
	I0923 11:40:15.159435   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.160027   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has current primary IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.160050   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Found IP for machine: 192.168.39.77
	I0923 11:40:15.160062   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Reserving static IP address...
	I0923 11:40:15.160484   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-193704", mac: "52:54:00:6e:e9:38", ip: "192.168.39.77"} in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.234715   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Reserved static IP address: 192.168.39.77
	I0923 11:40:15.234754   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Waiting for SSH to be available...
	I0923 11:40:15.234764   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Getting to WaitForSSH function...
	I0923 11:40:15.237246   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.237749   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.237785   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.237891   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Using SSH client type: external
	I0923 11:40:15.237914   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa (-rw-------)
	I0923 11:40:15.237957   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 11:40:15.237994   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | About to run SSH command:
	I0923 11:40:15.238011   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | exit 0
	I0923 11:40:15.369302   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | SSH cmd err, output: <nil>: 
	I0923 11:40:15.369568   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) KVM machine creation complete!
	I0923 11:40:15.369909   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetConfigRaw
	I0923 11:40:15.370532   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:15.370684   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:15.370822   49135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 11:40:15.370837   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetState
	I0923 11:40:15.372186   49135 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 11:40:15.372201   49135 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 11:40:15.372207   49135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 11:40:15.372214   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:15.374846   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.375260   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.375290   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.375368   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:15.375547   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.375710   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.375880   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:15.376048   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:15.376323   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:15.376337   49135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 11:40:15.480839   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:40:15.480877   49135 main.go:141] libmachine: Detecting the provisioner...
	I0923 11:40:15.480887   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:15.484130   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.484692   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.484720   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.484926   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:15.485108   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.485267   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.485471   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:15.485625   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:15.485858   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:15.485873   49135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 11:40:15.594368   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 11:40:15.594477   49135 main.go:141] libmachine: found compatible host: buildroot
	I0923 11:40:15.594491   49135 main.go:141] libmachine: Provisioning with buildroot...
	I0923 11:40:15.594501   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:40:15.594732   49135 buildroot.go:166] provisioning hostname "kubernetes-upgrade-193704"
	I0923 11:40:15.594762   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:40:15.594945   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:15.597727   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.598151   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.598177   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.598308   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:15.598480   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.598605   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.598733   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:15.598904   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:15.599074   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:15.599086   49135 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-193704 && echo "kubernetes-upgrade-193704" | sudo tee /etc/hostname
	I0923 11:40:15.720036   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-193704
	
	I0923 11:40:15.720066   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:15.723076   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.723477   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.723511   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.723692   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:15.723864   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.723989   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:15.724117   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:15.724292   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:15.724526   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:15.724555   49135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-193704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-193704/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-193704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:40:15.844279   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:40:15.844310   49135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:40:15.844370   49135 buildroot.go:174] setting up certificates
	I0923 11:40:15.844385   49135 provision.go:84] configureAuth start
	I0923 11:40:15.844401   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:40:15.844652   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:40:15.847280   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.847616   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.847648   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.847798   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:15.849980   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.850317   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:15.850374   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:15.850481   49135 provision.go:143] copyHostCerts
	I0923 11:40:15.850543   49135 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:40:15.850560   49135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:40:15.850622   49135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:40:15.850719   49135 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:40:15.850727   49135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:40:15.850756   49135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:40:15.850809   49135 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:40:15.850818   49135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:40:15.850842   49135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:40:15.850884   49135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-193704 san=[127.0.0.1 192.168.39.77 kubernetes-upgrade-193704 localhost minikube]
	I0923 11:40:16.297710   49135 provision.go:177] copyRemoteCerts
	I0923 11:40:16.297773   49135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:40:16.297797   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.300469   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.300805   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.300840   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.300949   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.301140   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.301308   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.301433   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:40:16.384013   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:40:16.413831   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0923 11:40:16.441261   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:40:16.468436   49135 provision.go:87] duration metric: took 624.019992ms to configureAuth
	I0923 11:40:16.468477   49135 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:40:16.468684   49135 config.go:182] Loaded profile config "kubernetes-upgrade-193704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0923 11:40:16.468769   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.471543   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.471871   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.471905   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.472036   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.472217   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.472348   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.472476   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.472606   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:16.472789   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:16.472809   49135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:40:16.722715   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:40:16.722744   49135 main.go:141] libmachine: Checking connection to Docker...
	I0923 11:40:16.722755   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetURL
	I0923 11:40:16.724293   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | Using libvirt version 6000000
	I0923 11:40:16.726708   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.727104   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.727128   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.727327   49135 main.go:141] libmachine: Docker is up and running!
	I0923 11:40:16.727338   49135 main.go:141] libmachine: Reticulating splines...
	I0923 11:40:16.727344   49135 client.go:171] duration metric: took 25.262267988s to LocalClient.Create
	I0923 11:40:16.727371   49135 start.go:167] duration metric: took 25.262332202s to libmachine.API.Create "kubernetes-upgrade-193704"
	I0923 11:40:16.727383   49135 start.go:293] postStartSetup for "kubernetes-upgrade-193704" (driver="kvm2")
	I0923 11:40:16.727395   49135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:40:16.727418   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:16.727681   49135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:40:16.727708   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.730306   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.730685   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.730715   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.730872   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.731049   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.731251   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.731398   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:40:16.816042   49135 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:40:16.820618   49135 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:40:16.820650   49135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:40:16.820753   49135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:40:16.820893   49135 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:40:16.821048   49135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:40:16.830849   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:40:16.856403   49135 start.go:296] duration metric: took 129.002603ms for postStartSetup
	I0923 11:40:16.856467   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetConfigRaw
	I0923 11:40:16.857146   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:40:16.859761   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.860167   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.860220   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.860470   49135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/config.json ...
	I0923 11:40:16.860650   49135 start.go:128] duration metric: took 25.414445098s to createHost
	I0923 11:40:16.860674   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.863137   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.863504   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.863543   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.863691   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.863882   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.864034   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.864191   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.864341   49135 main.go:141] libmachine: Using SSH client type: native
	I0923 11:40:16.864499   49135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:40:16.864514   49135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:40:16.970221   49135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091616.935448952
	
	I0923 11:40:16.970248   49135 fix.go:216] guest clock: 1727091616.935448952
	I0923 11:40:16.970258   49135 fix.go:229] Guest: 2024-09-23 11:40:16.935448952 +0000 UTC Remote: 2024-09-23 11:40:16.860662147 +0000 UTC m=+25.534888736 (delta=74.786805ms)
	I0923 11:40:16.970283   49135 fix.go:200] guest clock delta is within tolerance: 74.786805ms
	I0923 11:40:16.970289   49135 start.go:83] releasing machines lock for "kubernetes-upgrade-193704", held for 25.524151889s
	I0923 11:40:16.970317   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:16.970584   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:40:16.973627   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.974019   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.974048   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.974219   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:16.974693   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:16.974885   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:40:16.974983   49135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:40:16.975036   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.975083   49135 ssh_runner.go:195] Run: cat /version.json
	I0923 11:40:16.975103   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:40:16.977796   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.978009   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.978147   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.978172   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.978361   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.978491   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:16.978508   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:16.978527   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.978697   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.978703   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:40:16.978845   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:40:16.978855   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:40:16.978961   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:40:16.979109   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:40:17.083788   49135 ssh_runner.go:195] Run: systemctl --version
	I0923 11:40:17.092034   49135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:40:17.265528   49135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:40:17.273432   49135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:40:17.273513   49135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:40:17.295893   49135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:40:17.295921   49135 start.go:495] detecting cgroup driver to use...
	I0923 11:40:17.295995   49135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:40:17.316423   49135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:40:17.331971   49135 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:40:17.332048   49135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:40:17.351100   49135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:40:17.368709   49135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:40:17.505918   49135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:40:17.670940   49135 docker.go:233] disabling docker service ...
	I0923 11:40:17.670995   49135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:40:17.685984   49135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:40:17.699688   49135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:40:17.824826   49135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:40:17.948904   49135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:40:17.963121   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:40:17.984304   49135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0923 11:40:17.984376   49135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:40:17.996416   49135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:40:17.996474   49135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:40:18.009556   49135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:40:18.020531   49135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:40:18.031251   49135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:40:18.041976   49135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:40:18.052145   49135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 11:40:18.052215   49135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 11:40:18.065944   49135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:40:18.075808   49135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:40:18.230096   49135 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:40:18.333246   49135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:40:18.333302   49135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:40:18.337988   49135 start.go:563] Will wait 60s for crictl version
	I0923 11:40:18.338051   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:18.342090   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:40:18.393702   49135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:40:18.393769   49135 ssh_runner.go:195] Run: crio --version
	I0923 11:40:18.423052   49135 ssh_runner.go:195] Run: crio --version
	I0923 11:40:18.453163   49135 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0923 11:40:18.454273   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:40:18.457449   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:18.457856   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:40:07 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:40:18.457878   49135 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:40:18.458156   49135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:40:18.462550   49135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:40:18.475421   49135 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:40:18.475542   49135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 11:40:18.475590   49135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:40:18.508144   49135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0923 11:40:18.508209   49135 ssh_runner.go:195] Run: which lz4
	I0923 11:40:18.512311   49135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:40:18.516464   49135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:40:18.516497   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0923 11:40:20.234462   49135 crio.go:462] duration metric: took 1.722159725s to copy over tarball
	I0923 11:40:20.234554   49135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:40:22.780067   49135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.545486917s)
	I0923 11:40:22.780106   49135 crio.go:469] duration metric: took 2.545608302s to extract the tarball
	I0923 11:40:22.780117   49135 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 11:40:22.823323   49135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:40:22.875334   49135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0923 11:40:22.875361   49135 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 11:40:22.875439   49135 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:40:22.875449   49135 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:22.875485   49135 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:22.875498   49135 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:22.875449   49135 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:22.875532   49135 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0923 11:40:22.875531   49135 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:22.875659   49135 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0923 11:40:22.877054   49135 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:22.877066   49135 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:22.877089   49135 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0923 11:40:22.877083   49135 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:22.877086   49135 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0923 11:40:22.877107   49135 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:40:22.877059   49135 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:22.877192   49135 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.061075   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0923 11:40:23.080702   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:23.094764   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:23.105941   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:23.109223   49135 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0923 11:40:23.109266   49135 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0923 11:40:23.109309   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.115075   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:23.145963   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.151795   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0923 11:40:23.176157   49135 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0923 11:40:23.176207   49135 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:23.176266   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.193825   49135 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0923 11:40:23.193875   49135 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:23.193924   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.257364   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 11:40:23.257430   49135 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0923 11:40:23.257544   49135 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:23.257583   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.257434   49135 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0923 11:40:23.257661   49135 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:23.257709   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.277951   49135 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0923 11:40:23.277987   49135 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.278036   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.325087   49135 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0923 11:40:23.325134   49135 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0923 11:40:23.325142   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:23.325171   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:23.325209   49135 ssh_runner.go:195] Run: which crictl
	I0923 11:40:23.325234   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:23.325258   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 11:40:23.325289   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.325290   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:23.471443   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:23.471537   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:23.471588   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.471657   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0923 11:40:23.471709   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:23.471714   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 11:40:23.471769   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:23.628234   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:40:23.628265   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:40:23.628276   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:40:23.628312   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0923 11:40:23.628396   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0923 11:40:23.628419   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 11:40:23.628485   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:40:23.750311   49135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0923 11:40:23.750474   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0923 11:40:23.756286   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0923 11:40:23.756434   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0923 11:40:23.756929   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0923 11:40:23.756976   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0923 11:40:23.787861   49135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0923 11:40:24.146620   49135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:40:24.296992   49135 cache_images.go:92] duration metric: took 1.421604583s to LoadCachedImages
	W0923 11:40:24.297092   49135 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
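The warning above only means the pause_3.2 tarball was absent from the local image cache, so that image ends up being pulled on the node instead. A minimal sketch for checking the cache directory named in the error (path copied verbatim from the log; the `minikube cache add` line is an assumption about how one could repopulate the entry, not something this test run does):

	# list the cached image tarballs the loader looked for
	ls -l /home/jenkins/minikube-integration/19689-3961/.minikube/cache/images/amd64/registry.k8s.io/
	# (assumption) repopulate the missing entry if desired
	minikube cache add registry.k8s.io/pause:3.2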
	I0923 11:40:24.297110   49135 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.20.0 crio true true} ...
	I0923 11:40:24.297240   49135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-193704 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:40:24.297319   49135 ssh_runner.go:195] Run: crio config
	I0923 11:40:24.348105   49135 cni.go:84] Creating CNI manager for ""
	I0923 11:40:24.348134   49135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:40:24.348145   49135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:40:24.348173   49135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-193704 NodeName:kubernetes-upgrade-193704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 11:40:24.348342   49135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-193704"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
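The YAML above is the kubeadm configuration minikube generates for this profile; a few lines further down it is copied to /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml on the node. A minimal sketch (profile name and binary path as they appear elsewhere in this report) for pulling back the file kubeadm actually consumed, e.g. to diff it against the version shown here:

	out/minikube-linux-amd64 -p kubernetes-upgrade-193704 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml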
	
	I0923 11:40:24.348420   49135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 11:40:24.359087   49135 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:40:24.359175   49135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:40:24.369411   49135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0923 11:40:24.388050   49135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:40:24.405721   49135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0923 11:40:24.425788   49135 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0923 11:40:24.429727   49135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:40:24.442425   49135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:40:24.571399   49135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:40:24.588269   49135 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704 for IP: 192.168.39.77
	I0923 11:40:24.588299   49135 certs.go:194] generating shared ca certs ...
	I0923 11:40:24.588322   49135 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:24.588522   49135 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:40:24.588583   49135 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:40:24.588598   49135 certs.go:256] generating profile certs ...
	I0923 11:40:24.588672   49135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.key
	I0923 11:40:24.588691   49135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.crt with IP's: []
	I0923 11:40:24.669696   49135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.crt ...
	I0923 11:40:24.669725   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.crt: {Name:mk0f3d32385fe2bd00517b44ae22f36dc6edeaa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:24.669882   49135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.key ...
	I0923 11:40:24.669894   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.key: {Name:mkbffce3f420ae3a5cf11617cd61b616e64e235b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:24.669969   49135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key.c7b3f995
	I0923 11:40:24.669985   49135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt.c7b3f995 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77]
	I0923 11:40:25.042449   49135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt.c7b3f995 ...
	I0923 11:40:25.042482   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt.c7b3f995: {Name:mk1a590839b9aec106cbde58f223462bdb0c8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:25.042647   49135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key.c7b3f995 ...
	I0923 11:40:25.042660   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key.c7b3f995: {Name:mkd44e6016fa92ca488092d4bc7f91b1a901599c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:25.042749   49135 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt.c7b3f995 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt
	I0923 11:40:25.042840   49135 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key.c7b3f995 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key
	I0923 11:40:25.042914   49135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key
	I0923 11:40:25.042937   49135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.crt with IP's: []
	I0923 11:40:25.140391   49135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.crt ...
	I0923 11:40:25.140428   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.crt: {Name:mk6821cb30298c4da1f409efbb18c66983d831c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:25.140613   49135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key ...
	I0923 11:40:25.140630   49135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key: {Name:mk5f648e19f9ed38eab0643a698407ef665dd8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:40:25.141072   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:40:25.141297   49135 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:40:25.141334   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:40:25.141419   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:40:25.141473   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:40:25.141535   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:40:25.141612   49135 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:40:25.142978   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:40:25.172897   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:40:25.202097   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:40:25.229700   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:40:25.253813   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 11:40:25.278394   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 11:40:25.303602   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:40:25.328839   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:40:25.354508   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:40:25.381807   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:40:25.405847   49135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:40:25.433363   49135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:40:25.458786   49135 ssh_runner.go:195] Run: openssl version
	I0923 11:40:25.465403   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:40:25.480800   49135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:40:25.487305   49135 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:40:25.487367   49135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:40:25.498677   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:40:25.512312   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:40:25.525188   49135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:40:25.531268   49135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:40:25.531332   49135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:40:25.536883   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:40:25.548580   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:40:25.561913   49135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:40:25.566737   49135 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:40:25.566800   49135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:40:25.572962   49135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
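The openssl x509 -hash calls above compute the subject hash that names the symlinks created in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, and so on). A minimal sketch for verifying one of these installs by hand on the node, using only the two commands the log itself runs:

	# print the subject hash; it should match the link name used below
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# confirm the hash-named symlink points back at the installed CA
	ls -l /etc/ssl/certs/b5213941.0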
	I0923 11:40:25.586258   49135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:40:25.590458   49135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:40:25.590521   49135 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:40:25.590612   49135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:40:25.590669   49135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:40:25.642924   49135 cri.go:89] found id: ""
	I0923 11:40:25.643021   49135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:40:25.655379   49135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:40:25.667770   49135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:40:25.678174   49135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:40:25.678199   49135 kubeadm.go:157] found existing configuration files:
	
	I0923 11:40:25.678244   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:40:25.690315   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:40:25.690371   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:40:25.701851   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:40:25.713239   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:40:25.713296   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:40:25.724919   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:40:25.736046   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:40:25.736105   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:40:25.747104   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:40:25.758691   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:40:25.758764   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:40:25.770718   49135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:40:26.067943   49135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:42:23.859013   49135 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0923 11:42:23.859137   49135 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0923 11:42:23.861205   49135 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0923 11:42:23.861277   49135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:42:23.861417   49135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:42:23.861568   49135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:42:23.861702   49135 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 11:42:23.861797   49135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:42:23.863478   49135 out.go:235]   - Generating certificates and keys ...
	I0923 11:42:23.863583   49135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:42:23.863657   49135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:42:23.863743   49135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:42:23.863812   49135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:42:23.863886   49135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:42:23.863951   49135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:42:23.864008   49135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:42:23.864167   49135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-193704 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0923 11:42:23.864278   49135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:42:23.864467   49135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-193704 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0923 11:42:23.864585   49135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:42:23.864680   49135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:42:23.864745   49135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:42:23.864824   49135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:42:23.864905   49135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:42:23.864985   49135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:42:23.865072   49135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:42:23.865148   49135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:42:23.865308   49135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:42:23.865462   49135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:42:23.865525   49135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:42:23.865635   49135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:42:23.867294   49135 out.go:235]   - Booting up control plane ...
	I0923 11:42:23.867397   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:42:23.867488   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:42:23.867577   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:42:23.867671   49135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:42:23.867894   49135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 11:42:23.867969   49135 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0923 11:42:23.868037   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:42:23.868194   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:42:23.868253   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:42:23.868448   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:42:23.868513   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:42:23.868679   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:42:23.868763   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:42:23.868964   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:42:23.869050   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:42:23.869252   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:42:23.869263   49135 kubeadm.go:310] 
	I0923 11:42:23.869313   49135 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0923 11:42:23.869367   49135 kubeadm.go:310] 		timed out waiting for the condition
	I0923 11:42:23.869374   49135 kubeadm.go:310] 
	I0923 11:42:23.869445   49135 kubeadm.go:310] 	This error is likely caused by:
	I0923 11:42:23.869624   49135 kubeadm.go:310] 		- The kubelet is not running
	I0923 11:42:23.869772   49135 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0923 11:42:23.869781   49135 kubeadm.go:310] 
	I0923 11:42:23.869907   49135 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0923 11:42:23.869956   49135 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0923 11:42:23.869994   49135 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0923 11:42:23.870004   49135 kubeadm.go:310] 
	I0923 11:42:23.870146   49135 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0923 11:42:23.870271   49135 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0923 11:42:23.870282   49135 kubeadm.go:310] 
	I0923 11:42:23.870438   49135 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0923 11:42:23.870596   49135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0923 11:42:23.870703   49135 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0923 11:42:23.870799   49135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0923 11:42:23.870862   49135 kubeadm.go:310] 
	W0923 11:42:23.870966   49135 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-193704 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-193704 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0923 11:42:23.871007   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0923 11:42:25.223262   49135 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.3522213s)
	I0923 11:42:25.223353   49135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:42:25.239104   49135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:42:25.250000   49135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:42:25.250044   49135 kubeadm.go:157] found existing configuration files:
	
	I0923 11:42:25.250109   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:42:25.262015   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:42:25.262087   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:42:25.273223   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:42:25.284250   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:42:25.284330   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:42:25.295756   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:42:25.309166   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:42:25.309231   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:42:25.320695   49135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:42:25.334617   49135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:42:25.334678   49135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:42:25.348866   49135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:42:25.432492   49135 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0923 11:42:25.432618   49135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:42:25.621148   49135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:42:25.621302   49135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:42:25.621442   49135 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 11:42:25.838709   49135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:42:26.020262   49135 out.go:235]   - Generating certificates and keys ...
	I0923 11:42:26.020443   49135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:42:26.020542   49135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:42:26.020672   49135 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 11:42:26.020766   49135 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 11:42:26.020880   49135 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 11:42:26.020974   49135 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 11:42:26.021068   49135 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 11:42:26.021185   49135 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 11:42:26.021306   49135 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 11:42:26.021430   49135 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 11:42:26.021485   49135 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 11:42:26.021559   49135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:42:26.021641   49135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:42:26.021755   49135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:42:26.273439   49135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:42:26.790257   49135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:42:26.809196   49135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:42:26.810488   49135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:42:26.810564   49135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:42:26.972181   49135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:42:27.089870   49135 out.go:235]   - Booting up control plane ...
	I0923 11:42:27.090005   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:42:27.090075   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:42:27.090166   49135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:42:27.090305   49135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:42:27.090539   49135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 11:43:06.988563   49135 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0923 11:43:06.989115   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:43:06.989369   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:43:11.990163   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:43:11.990464   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:43:21.991057   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:43:21.991319   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:43:41.991437   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:43:41.991660   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:44:21.992747   49135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0923 11:44:21.993000   49135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0923 11:44:21.993035   49135 kubeadm.go:310] 
	I0923 11:44:21.993105   49135 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0923 11:44:21.993187   49135 kubeadm.go:310] 		timed out waiting for the condition
	I0923 11:44:21.993198   49135 kubeadm.go:310] 
	I0923 11:44:21.993253   49135 kubeadm.go:310] 	This error is likely caused by:
	I0923 11:44:21.993293   49135 kubeadm.go:310] 		- The kubelet is not running
	I0923 11:44:21.993418   49135 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0923 11:44:21.993430   49135 kubeadm.go:310] 
	I0923 11:44:21.993560   49135 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0923 11:44:21.993604   49135 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0923 11:44:21.993643   49135 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0923 11:44:21.993652   49135 kubeadm.go:310] 
	I0923 11:44:21.993772   49135 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0923 11:44:21.993880   49135 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0923 11:44:21.993892   49135 kubeadm.go:310] 
	I0923 11:44:21.994021   49135 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0923 11:44:21.994134   49135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0923 11:44:21.994235   49135 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0923 11:44:21.994325   49135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0923 11:44:21.994348   49135 kubeadm.go:310] 
	I0923 11:44:21.995604   49135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:44:21.995722   49135 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0923 11:44:21.995824   49135 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
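Both init attempts fail the same way: the kubelet never answers on 127.0.0.1:10248, so kubeadm times out waiting for the control plane. The error text already names the useful next steps; a minimal sketch of that triage, run on the node (for example via minikube ssh) and using only commands quoted in the output above:

	# is the kubelet unit running at all?
	sudo systemctl status kubelet
	# recent kubelet logs usually show why it keeps exiting
	sudo journalctl -xeu kubelet | tail -n 100
	# were any control-plane containers started by CRI-O?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause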
	I0923 11:44:21.995908   49135 kubeadm.go:394] duration metric: took 3m56.405388s to StartCluster
	I0923 11:44:21.995962   49135 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 11:44:21.996017   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 11:44:22.043958   49135 cri.go:89] found id: ""
	I0923 11:44:22.043992   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.044003   49135 logs.go:278] No container was found matching "kube-apiserver"
	I0923 11:44:22.044011   49135 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 11:44:22.044080   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 11:44:22.079316   49135 cri.go:89] found id: ""
	I0923 11:44:22.079349   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.079360   49135 logs.go:278] No container was found matching "etcd"
	I0923 11:44:22.079369   49135 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 11:44:22.079433   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 11:44:22.123209   49135 cri.go:89] found id: ""
	I0923 11:44:22.123238   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.123249   49135 logs.go:278] No container was found matching "coredns"
	I0923 11:44:22.123256   49135 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 11:44:22.123326   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 11:44:22.160774   49135 cri.go:89] found id: ""
	I0923 11:44:22.160807   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.160818   49135 logs.go:278] No container was found matching "kube-scheduler"
	I0923 11:44:22.160826   49135 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 11:44:22.160892   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 11:44:22.217579   49135 cri.go:89] found id: ""
	I0923 11:44:22.217601   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.217610   49135 logs.go:278] No container was found matching "kube-proxy"
	I0923 11:44:22.217618   49135 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 11:44:22.217672   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 11:44:22.257965   49135 cri.go:89] found id: ""
	I0923 11:44:22.257996   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.258015   49135 logs.go:278] No container was found matching "kube-controller-manager"
	I0923 11:44:22.258023   49135 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 11:44:22.258095   49135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 11:44:22.297374   49135 cri.go:89] found id: ""
	I0923 11:44:22.297420   49135 logs.go:276] 0 containers: []
	W0923 11:44:22.297431   49135 logs.go:278] No container was found matching "kindnet"
	I0923 11:44:22.297442   49135 logs.go:123] Gathering logs for container status ...
	I0923 11:44:22.297458   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 11:44:22.347398   49135 logs.go:123] Gathering logs for kubelet ...
	I0923 11:44:22.347432   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 11:44:22.415186   49135 logs.go:123] Gathering logs for dmesg ...
	I0923 11:44:22.415227   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 11:44:22.432226   49135 logs.go:123] Gathering logs for describe nodes ...
	I0923 11:44:22.432253   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0923 11:44:22.566962   49135 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0923 11:44:22.566987   49135 logs.go:123] Gathering logs for CRI-O ...
	I0923 11:44:22.567001   49135 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0923 11:44:22.672783   49135 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0923 11:44:22.672846   49135 out.go:270] * 
	W0923 11:44:22.672904   49135 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0923 11:44:22.672916   49135 out.go:270] * 
	W0923 11:44:22.673798   49135 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:44:22.676794   49135 out.go:201] 
	W0923 11:44:22.677955   49135 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0923 11:44:22.677992   49135 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0923 11:44:22.678011   49135 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0923 11:44:22.679368   49135 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-193704
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-193704: (1.849176105s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-193704 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-193704 status --format={{.Host}}: exit status 7 (60.859004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.850040378s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-193704 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (78.204967ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-193704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-193704
	    minikube start -p kubernetes-upgrade-193704 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1937042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-193704 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0923 11:45:38.502991   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:45:40.509523   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-193704 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.366890184s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-23 11:46:55.000212394 +0000 UTC m=+5151.069530997
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-193704 -n kubernetes-upgrade-193704
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-193704 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-193704 logs -n 25: (1.647359083s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-605245                       | pause-605245              | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-605245                       | pause-605245              | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:43 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-605245                       | pause-605245              | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:43 UTC |
	| start   | -p force-systemd-flag-936120          | force-systemd-flag-936120 | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:44 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-496732             | running-upgrade-496732    | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:43 UTC |
	| ssh     | -p NoKubernetes-717494 sudo           | NoKubernetes-717494       | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-717494                | NoKubernetes-717494       | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:43 UTC |
	| start   | -p force-systemd-env-694064           | force-systemd-env-694064  | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:44 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-717494                | NoKubernetes-717494       | jenkins | v1.34.0 | 23 Sep 24 11:43 UTC | 23 Sep 24 11:44 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-193704          | kubernetes-upgrade-193704 | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:44 UTC |
	| ssh     | force-systemd-flag-936120 ssh cat     | force-systemd-flag-936120 | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-936120          | force-systemd-flag-936120 | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:44 UTC |
	| start   | -p kubernetes-upgrade-193704          | kubernetes-upgrade-193704 | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:45 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-516973             | cert-expiration-516973    | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:45 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-694064           | force-systemd-env-694064  | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:44 UTC |
	| start   | -p cert-options-796310                | cert-options-796310       | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:46 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-717494 sudo           | NoKubernetes-717494       | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-717494                | NoKubernetes-717494       | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC | 23 Sep 24 11:44 UTC |
	| start   | -p auto-283725 --memory=3072          | auto-283725               | jenkins | v1.34.0 | 23 Sep 24 11:44 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-193704          | kubernetes-upgrade-193704 | jenkins | v1.34.0 | 23 Sep 24 11:45 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-193704          | kubernetes-upgrade-193704 | jenkins | v1.34.0 | 23 Sep 24 11:45 UTC | 23 Sep 24 11:46 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-796310 ssh               | cert-options-796310       | jenkins | v1.34.0 | 23 Sep 24 11:46 UTC | 23 Sep 24 11:46 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-796310 -- sudo        | cert-options-796310       | jenkins | v1.34.0 | 23 Sep 24 11:46 UTC | 23 Sep 24 11:46 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-796310                | cert-options-796310       | jenkins | v1.34.0 | 23 Sep 24 11:46 UTC | 23 Sep 24 11:46 UTC |
	| start   | -p flannel-283725                     | flannel-283725            | jenkins | v1.34.0 | 23 Sep 24 11:46 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:46:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:46:23.448105   57554 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:46:23.448388   57554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:46:23.448397   57554 out.go:358] Setting ErrFile to fd 2...
	I0923 11:46:23.448402   57554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:46:23.448569   57554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:46:23.449112   57554 out.go:352] Setting JSON to false
	I0923 11:46:23.450082   57554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5326,"bootTime":1727086657,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:46:23.450177   57554 start.go:139] virtualization: kvm guest
	I0923 11:46:23.452260   57554 out.go:177] * [flannel-283725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:46:23.453590   57554 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:46:23.453608   57554 notify.go:220] Checking for updates...
	I0923 11:46:23.455766   57554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:46:23.456993   57554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:46:23.458132   57554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:46:23.459249   57554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:46:23.460244   57554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:46:23.461965   57554 config.go:182] Loaded profile config "auto-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:46:23.462084   57554 config.go:182] Loaded profile config "cert-expiration-516973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:46:23.462216   57554 config.go:182] Loaded profile config "kubernetes-upgrade-193704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:46:23.462310   57554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:46:23.499561   57554 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 11:46:23.500808   57554 start.go:297] selected driver: kvm2
	I0923 11:46:23.500821   57554 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:46:23.500847   57554 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:46:23.501547   57554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:46:23.501628   57554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:46:23.517528   57554 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:46:23.517582   57554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:46:23.517818   57554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:46:23.517849   57554 cni.go:84] Creating CNI manager for "flannel"
	I0923 11:46:23.517857   57554 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0923 11:46:23.517926   57554 start.go:340] cluster config:
	{Name:flannel-283725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-283725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:46:23.518032   57554 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:46:23.519936   57554 out.go:177] * Starting "flannel-283725" primary control-plane node in "flannel-283725" cluster
	I0923 11:46:24.295830   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:24.296562   56394 main.go:141] libmachine: (auto-283725) DBG | unable to find current IP address of domain auto-283725 in network mk-auto-283725
	I0923 11:46:24.296585   56394 main.go:141] libmachine: (auto-283725) DBG | I0923 11:46:24.296542   57146 retry.go:31] will retry after 4.165376697s: waiting for machine to come up
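The two DBG lines above show libmachine polling for the new VM's DHCP lease and backing off between failed attempts (retry.go: "will retry after 4.165376697s"). A minimal, self-contained Go sketch of that wait-with-backoff pattern; waitForIP and the lookup callback are hypothetical names for this sketch, not minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer after each failed attempt (rough backoff).
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Stand-in for "ask libvirt/DHCP for the domain's lease"; fails twice first.
	calls := 0
	lookup := func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.153", nil
	}
	ip, err := waitForIP(lookup, time.Minute)
	fmt.Println(ip, err)
}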
	I0923 11:46:23.521173   57554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:46:23.521217   57554 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 11:46:23.521228   57554 cache.go:56] Caching tarball of preloaded images
	I0923 11:46:23.521303   57554 preload.go:172] Found /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 11:46:23.521328   57554 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 11:46:23.521457   57554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/flannel-283725/config.json ...
	I0923 11:46:23.521483   57554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/flannel-283725/config.json: {Name:mk0b2fa660d56bce7db35438dbc355da9ae3578a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:23.521634   57554 start.go:360] acquireMachinesLock for flannel-283725: {Name:mkfb991351a9255e404db4d8f1990f914d698323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:46:29.818287   56777 start.go:364] duration metric: took 59.042908568s to acquireMachinesLock for "kubernetes-upgrade-193704"
	I0923 11:46:29.818369   56777 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:46:29.818382   56777 fix.go:54] fixHost starting: 
	I0923 11:46:29.818786   56777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:46:29.818831   56777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:46:29.838534   56777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0923 11:46:29.838964   56777 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:46:29.839489   56777 main.go:141] libmachine: Using API Version  1
	I0923 11:46:29.839543   56777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:46:29.839869   56777 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:46:29.840066   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:29.840183   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetState
	I0923 11:46:29.841851   56777 fix.go:112] recreateIfNeeded on kubernetes-upgrade-193704: state=Running err=<nil>
	W0923 11:46:29.841885   56777 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:46:29.843968   56777 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-193704" VM ...
	I0923 11:46:29.845099   56777 machine.go:93] provisionDockerMachine start ...
	I0923 11:46:29.845122   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:29.845301   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:29.847841   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:29.848370   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:29.848396   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:29.848580   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:29.848730   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:29.848883   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:29.849017   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:29.849170   56777 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:29.849405   56777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:46:29.849427   56777 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:46:29.964233   56777 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-193704
	
	I0923 11:46:29.964265   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:46:29.964517   56777 buildroot.go:166] provisioning hostname "kubernetes-upgrade-193704"
	I0923 11:46:29.964547   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:46:29.964716   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:29.967605   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:29.968045   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:29.968074   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:29.968221   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:29.968392   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:29.968523   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:29.968665   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:29.968842   56777 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:29.969068   56777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:46:29.969083   56777 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-193704 && echo "kubernetes-upgrade-193704" | sudo tee /etc/hostname
	I0923 11:46:30.088229   56777 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-193704
	
	I0923 11:46:30.088260   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:30.090992   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.091371   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:30.091399   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.091583   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:30.091725   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:30.091862   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:30.091971   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:30.092125   56777 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:30.092338   56777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:46:30.092355   56777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-193704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-193704/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-193704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:46:30.211571   56777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
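The shell block above is the idempotent /etc/hosts fix-up minikube runs over SSH: rewrite the 127.0.1.1 line if one exists, append it otherwise, and do nothing when the hostname is already present. A rough Go sketch of shipping a similar script over a plain ssh client via os/exec; the user, host, key path, and hostname are illustrative values, not taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// ensureHostsEntry runs a small idempotent script on the remote machine so
// repeated provisioning passes do not duplicate the 127.0.1.1 line.
func ensureHostsEntry(user, host, keyPath, hostname string) error {
	script := fmt.Sprintf(`if ! grep -q '%[1]s' /etc/hosts; then
  if grep -q '^127.0.1.1' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1.*/127.0.1.1 %[1]s/' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)

	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		fmt.Sprintf("%s@%s", user, host),
		script)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("hosts update failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative values only.
	if err := ensureHostsEntry("docker", "192.168.39.77", "/path/to/id_rsa", "example-host"); err != nil {
		log.Fatal(err)
	}
}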
	I0923 11:46:30.211600   56777 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:46:30.211644   56777 buildroot.go:174] setting up certificates
	I0923 11:46:30.211664   56777 provision.go:84] configureAuth start
	I0923 11:46:30.211689   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetMachineName
	I0923 11:46:30.211918   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:46:30.214700   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.215129   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:30.215157   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.215311   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:30.217470   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.217817   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:30.217857   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.217941   56777 provision.go:143] copyHostCerts
	I0923 11:46:30.218004   56777 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:46:30.218028   56777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:46:30.218103   56777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:46:30.218221   56777 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:46:30.218234   56777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:46:30.218264   56777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:46:30.218341   56777 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:46:30.218350   56777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:46:30.218376   56777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:46:30.218443   56777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-193704 san=[127.0.0.1 192.168.39.77 kubernetes-upgrade-193704 localhost minikube]
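The provision.go:117 line above generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost, and minikube, signed by the local CA key. A compact sketch of the same idea with Go's crypto/x509; it is self-signed here for brevity (minikube signs with its CA key instead), and the names and IPs are placeholders:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// SANs roughly mirroring the log line: loopback, the VM IP, and its names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.example-machine"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"example-machine", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
	}

	// Self-signed for the sketch; a real provisioner passes the CA cert and key as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}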
	I0923 11:46:30.518825   56777 provision.go:177] copyRemoteCerts
	I0923 11:46:30.518889   56777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:46:30.518911   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:30.521421   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.521746   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:30.521790   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.521938   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:30.522154   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:30.522330   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:30.522501   56777 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:46:30.610648   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:46:30.641118   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:46:30.669310   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0923 11:46:28.466768   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.467337   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has current primary IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.467373   56394 main.go:141] libmachine: (auto-283725) Found IP for machine: 192.168.72.153
	I0923 11:46:28.467386   56394 main.go:141] libmachine: (auto-283725) Reserving static IP address...
	I0923 11:46:28.467814   56394 main.go:141] libmachine: (auto-283725) DBG | unable to find host DHCP lease matching {name: "auto-283725", mac: "52:54:00:88:9c:29", ip: "192.168.72.153"} in network mk-auto-283725
	I0923 11:46:28.542309   56394 main.go:141] libmachine: (auto-283725) Reserved static IP address: 192.168.72.153
	I0923 11:46:28.542347   56394 main.go:141] libmachine: (auto-283725) DBG | Getting to WaitForSSH function...
	I0923 11:46:28.542379   56394 main.go:141] libmachine: (auto-283725) Waiting for SSH to be available...
	I0923 11:46:28.544650   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.545051   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:28.545082   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.545176   56394 main.go:141] libmachine: (auto-283725) DBG | Using SSH client type: external
	I0923 11:46:28.545202   56394 main.go:141] libmachine: (auto-283725) DBG | Using SSH private key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa (-rw-------)
	I0923 11:46:28.545245   56394 main.go:141] libmachine: (auto-283725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 11:46:28.545253   56394 main.go:141] libmachine: (auto-283725) DBG | About to run SSH command:
	I0923 11:46:28.545261   56394 main.go:141] libmachine: (auto-283725) DBG | exit 0
	I0923 11:46:28.669960   56394 main.go:141] libmachine: (auto-283725) DBG | SSH cmd err, output: <nil>: 
	I0923 11:46:28.670228   56394 main.go:141] libmachine: (auto-283725) KVM machine creation complete!
	I0923 11:46:28.670550   56394 main.go:141] libmachine: (auto-283725) Calling .GetConfigRaw
	I0923 11:46:28.671091   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:28.671277   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:28.671423   56394 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 11:46:28.671433   56394 main.go:141] libmachine: (auto-283725) Calling .GetState
	I0923 11:46:28.672731   56394 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 11:46:28.672742   56394 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 11:46:28.672746   56394 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 11:46:28.672751   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:28.675141   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.675523   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:28.675553   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.675699   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:28.675859   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.675979   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.676110   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:28.676301   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:28.676491   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:28.676503   56394 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 11:46:28.776782   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
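In the WaitForSSH exchange above, the driver simply keeps running `ssh ... exit 0` against the guest until it answers. An equivalent low-tech readiness check, sketched in Go, is to poll TCP port 22 until a connection is accepted; this only proves the port is open, not that a login would succeed, so it is a simplification of what the log shows:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH returns nil once something is listening on host:22, or an error
// when the overall deadline passes.
func waitForSSH(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh on %s did not become reachable within %s", host, timeout)
}

func main() {
	// Illustrative IP; in the log it is the freshly leased guest address.
	if err := waitForSSH("192.168.72.153", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}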
	I0923 11:46:28.776814   56394 main.go:141] libmachine: Detecting the provisioner...
	I0923 11:46:28.776826   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:28.779649   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.780056   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:28.780084   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.780300   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:28.780461   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.780594   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.780700   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:28.780865   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:28.781020   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:28.781031   56394 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 11:46:28.886404   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 11:46:28.886463   56394 main.go:141] libmachine: found compatible host: buildroot
	I0923 11:46:28.886469   56394 main.go:141] libmachine: Provisioning with buildroot...
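The provisioner is detected by cat'ing /etc/os-release (output above) and matching on its NAME/VERSION fields, which is how "found compatible host: buildroot" is derived. A small Go sketch of that detection step, parsing the key=value format and stripping optional quotes:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release text into a map, dropping comments
// and surrounding quotes, e.g. NAME=Buildroot, VERSION_ID=2023.02.9.
func parseOSRelease(data string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	sample := "NAME=Buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["NAME"] == "Buildroot" {
		fmt.Println("found compatible host:", strings.ToLower(info["NAME"]))
	}
}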
	I0923 11:46:28.886476   56394 main.go:141] libmachine: (auto-283725) Calling .GetMachineName
	I0923 11:46:28.886718   56394 buildroot.go:166] provisioning hostname "auto-283725"
	I0923 11:46:28.886758   56394 main.go:141] libmachine: (auto-283725) Calling .GetMachineName
	I0923 11:46:28.886948   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:28.889682   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.890050   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:28.890074   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:28.890228   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:28.890396   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.890524   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:28.890681   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:28.890884   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:28.891060   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:28.891072   56394 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-283725 && echo "auto-283725" | sudo tee /etc/hostname
	I0923 11:46:29.008132   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-283725
	
	I0923 11:46:29.008158   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.010553   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.010912   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.010938   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.011117   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.011265   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.011418   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.011558   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.011717   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:29.011877   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:29.011893   56394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-283725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-283725/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-283725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:46:29.122828   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:46:29.122858   56394 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3961/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3961/.minikube}
	I0923 11:46:29.122876   56394 buildroot.go:174] setting up certificates
	I0923 11:46:29.122884   56394 provision.go:84] configureAuth start
	I0923 11:46:29.122892   56394 main.go:141] libmachine: (auto-283725) Calling .GetMachineName
	I0923 11:46:29.123146   56394 main.go:141] libmachine: (auto-283725) Calling .GetIP
	I0923 11:46:29.125617   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.125977   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.126007   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.126162   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.128438   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.128759   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.128786   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.128921   56394 provision.go:143] copyHostCerts
	I0923 11:46:29.128981   56394 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem, removing ...
	I0923 11:46:29.128997   56394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem
	I0923 11:46:29.129053   56394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/ca.pem (1078 bytes)
	I0923 11:46:29.129168   56394 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem, removing ...
	I0923 11:46:29.129178   56394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem
	I0923 11:46:29.129198   56394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/cert.pem (1123 bytes)
	I0923 11:46:29.129268   56394 exec_runner.go:144] found /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem, removing ...
	I0923 11:46:29.129275   56394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem
	I0923 11:46:29.129294   56394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3961/.minikube/key.pem (1675 bytes)
	I0923 11:46:29.129359   56394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem org=jenkins.auto-283725 san=[127.0.0.1 192.168.72.153 auto-283725 localhost minikube]
	I0923 11:46:29.188404   56394 provision.go:177] copyRemoteCerts
	I0923 11:46:29.188489   56394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:46:29.188519   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.191095   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.191357   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.191380   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.191533   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.191676   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.191793   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.191889   56394 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa Username:docker}
	I0923 11:46:29.272106   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0923 11:46:29.299764   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:46:29.327418   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:46:29.351815   56394 provision.go:87] duration metric: took 228.917593ms to configureAuth
	I0923 11:46:29.351842   56394 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:46:29.351992   56394 config.go:182] Loaded profile config "auto-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:46:29.352070   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.354781   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.355119   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.355147   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.355326   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.355521   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.355687   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.355818   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.355966   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:29.356132   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:29.356154   56394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:46:29.573541   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:46:29.573564   56394 main.go:141] libmachine: Checking connection to Docker...
	I0923 11:46:29.573571   56394 main.go:141] libmachine: (auto-283725) Calling .GetURL
	I0923 11:46:29.574790   56394 main.go:141] libmachine: (auto-283725) DBG | Using libvirt version 6000000
	I0923 11:46:29.576796   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.577101   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.577125   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.577288   56394 main.go:141] libmachine: Docker is up and running!
	I0923 11:46:29.577309   56394 main.go:141] libmachine: Reticulating splines...
	I0923 11:46:29.577317   56394 client.go:171] duration metric: took 25.482635179s to LocalClient.Create
	I0923 11:46:29.577343   56394 start.go:167] duration metric: took 25.482735204s to libmachine.API.Create "auto-283725"
	I0923 11:46:29.577365   56394 start.go:293] postStartSetup for "auto-283725" (driver="kvm2")
	I0923 11:46:29.577397   56394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:46:29.577419   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:29.577648   56394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:46:29.577674   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.579828   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.580131   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.580157   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.580334   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.580538   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.580725   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.580867   56394 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa Username:docker}
	I0923 11:46:29.666314   56394 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:46:29.671093   56394 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:46:29.671118   56394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:46:29.671190   56394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:46:29.671276   56394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:46:29.671412   56394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:46:29.681446   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:46:29.706715   56394 start.go:296] duration metric: took 129.33376ms for postStartSetup
	I0923 11:46:29.706768   56394 main.go:141] libmachine: (auto-283725) Calling .GetConfigRaw
	I0923 11:46:29.707318   56394 main.go:141] libmachine: (auto-283725) Calling .GetIP
	I0923 11:46:29.710143   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.710509   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.710544   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.710822   56394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/config.json ...
	I0923 11:46:29.711074   56394 start.go:128] duration metric: took 25.640541106s to createHost
	I0923 11:46:29.711103   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.713765   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.714087   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.714114   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.714282   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.714444   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.714589   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.714693   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.714802   56394 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:29.714962   56394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I0923 11:46:29.714971   56394 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:46:29.818120   56394 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091989.794791866
	
	I0923 11:46:29.818142   56394 fix.go:216] guest clock: 1727091989.794791866
	I0923 11:46:29.818163   56394 fix.go:229] Guest: 2024-09-23 11:46:29.794791866 +0000 UTC Remote: 2024-09-23 11:46:29.711088599 +0000 UTC m=+97.003821716 (delta=83.703267ms)
	I0923 11:46:29.818183   56394 fix.go:200] guest clock delta is within tolerance: 83.703267ms
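fix.go compares the guest's `date +%s.%N` output against the host clock and, per the lines above, leaves it alone because the ~84ms skew is within tolerance. A minimal Go sketch of that comparison; the one-second tolerance is an assumed value for illustration, not read from minikube:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns how far
// the guest clock is ahead of (positive) or behind (negative) the local time.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(guestOutput), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if nsecStr != "" {
		for len(nsecStr) < 9 { // pad so the fraction really is nanoseconds
			nsecStr += "0"
		}
		if nsec, err = strconv.ParseInt(nsecStr[:9], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	// Guest and host timestamps copied from the log lines above.
	local := time.Unix(1727091989, 711088599)
	delta, err := clockDelta("1727091989.794791866", local)
	if err != nil {
		fmt.Println(err)
		return
	}
	const tolerance = time.Second // assumed threshold for this sketch
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	if abs <= tolerance {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds %s, would resync\n", delta, tolerance)
	}
}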
	I0923 11:46:29.818187   56394 start.go:83] releasing machines lock for "auto-283725", held for 25.747824856s
	I0923 11:46:29.818209   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:29.818474   56394 main.go:141] libmachine: (auto-283725) Calling .GetIP
	I0923 11:46:29.821205   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.821574   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.821629   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.821808   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:29.822385   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:29.822561   56394 main.go:141] libmachine: (auto-283725) Calling .DriverName
	I0923 11:46:29.822666   56394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:46:29.822703   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.822757   56394 ssh_runner.go:195] Run: cat /version.json
	I0923 11:46:29.822774   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHHostname
	I0923 11:46:29.825506   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.825600   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.825845   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.825870   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.825897   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:29.825908   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:29.825958   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.826157   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.826159   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHPort
	I0923 11:46:29.826361   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHKeyPath
	I0923 11:46:29.826385   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.826527   56394 main.go:141] libmachine: (auto-283725) Calling .GetSSHUsername
	I0923 11:46:29.826594   56394 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa Username:docker}
	I0923 11:46:29.826646   56394 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/auto-283725/id_rsa Username:docker}
	I0923 11:46:29.925715   56394 ssh_runner.go:195] Run: systemctl --version
	I0923 11:46:29.931577   56394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:46:30.092725   56394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:46:30.099353   56394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:46:30.099431   56394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:46:30.118243   56394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:46:30.118264   56394 start.go:495] detecting cgroup driver to use...
	I0923 11:46:30.118333   56394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:46:30.137349   56394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:46:30.153112   56394 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:46:30.153174   56394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:46:30.168685   56394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:46:30.184244   56394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:46:30.312514   56394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:46:30.476242   56394 docker.go:233] disabling docker service ...
	I0923 11:46:30.476306   56394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:46:30.491104   56394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:46:30.505834   56394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:46:30.653927   56394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:46:30.797611   56394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:46:30.811533   56394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:46:30.832355   56394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:46:30.832441   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.844861   56394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:46:30.844915   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.856461   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.868906   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.881990   56394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:46:30.894891   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.907512   56394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:30.925566   56394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
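The run of `sed -i` commands above pins the pause image, forces the "cgroupfs" cgroup manager, sets conmon_cgroup to "pod", and opens unprivileged ports in /etc/crio/crio.conf.d/02-crio.conf. The first two of those edits can be sketched as an in-place rewrite in Go; the regexes here are approximations of the sed expressions in the log, not minikube's code:

package main

import (
	"log"
	"os"
	"regexp"
)

// patchCrioConf applies line-level replacements to a crio drop-in, mirroring
// the sed calls in the log for the pause image and cgroup manager.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	replacements := []struct {
		re   *regexp.Regexp
		with string
	}{
		{regexp.MustCompile(`(?m)^.*pause_image = .*$`), `pause_image = "registry.k8s.io/pause:3.10"`},
		{regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`), `cgroup_manager = "cgroupfs"`},
	}
	for _, r := range replacements {
		data = r.re.ReplaceAll(data, []byte(r.with))
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Path copied from the log; this would run on the guest, not the CI host.
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		log.Fatal(err)
	}
}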
	I0923 11:46:30.935980   56394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:46:30.947246   56394 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 11:46:30.947311   56394 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 11:46:30.963537   56394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
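Just above, `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 because br_netfilter is not loaded yet, so the flow falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A hedged Go sketch of those three steps, intended to run as root on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 1. Is the bridge netfilter knob visible yet?
	knob := "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err != nil {
		fmt.Printf("%s not present (%v); loading br_netfilter\n", knob, err)
		// 2. Load the module so the sysctl appears.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// 3. Make sure the kernel forwards IPv4 traffic between interfaces.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}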
	I0923 11:46:30.975028   56394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:46:31.114113   56394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 11:46:31.210028   56394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:46:31.210097   56394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:46:31.214829   56394 start.go:563] Will wait 60s for crictl version
	I0923 11:46:31.214883   56394 ssh_runner.go:195] Run: which crictl
	I0923 11:46:31.218512   56394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:46:31.256120   56394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:46:31.256204   56394 ssh_runner.go:195] Run: crio --version
	I0923 11:46:31.283670   56394 ssh_runner.go:195] Run: crio --version
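start.go above waits up to 60s for the crio socket and then for `crictl version` to answer (RuntimeName cri-o, RuntimeVersion 1.29.1). A small sketch of such a wait loop using os/exec; the binary path and timeout mirror the log but are hard-coded for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl keeps invoking `crictl version` until it succeeds or the
// deadline passes, returning the last error if it never does.
func waitForCrictl(crictlPath string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		out, err := exec.Command(crictlPath, "version").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("%v: %s", err, out)
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("crictl did not become ready: %w", lastErr)
}

func main() {
	out, err := waitForCrictl("/usr/bin/crictl", 60*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}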
	I0923 11:46:31.313719   56394 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:46:31.314789   56394 main.go:141] libmachine: (auto-283725) Calling .GetIP
	I0923 11:46:31.317760   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:31.318181   56394 main.go:141] libmachine: (auto-283725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:9c:29", ip: ""} in network mk-auto-283725: {Iface:virbr4 ExpiryTime:2024-09-23 12:46:19 +0000 UTC Type:0 Mac:52:54:00:88:9c:29 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:auto-283725 Clientid:01:52:54:00:88:9c:29}
	I0923 11:46:31.318213   56394 main.go:141] libmachine: (auto-283725) DBG | domain auto-283725 has defined IP address 192.168.72.153 and MAC address 52:54:00:88:9c:29 in network mk-auto-283725
	I0923 11:46:31.318467   56394 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0923 11:46:31.322616   56394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:46:31.335831   56394 kubeadm.go:883] updating cluster {Name:auto-283725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-283725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:46:31.335965   56394 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:46:31.336039   56394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:46:31.372755   56394 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 11:46:31.372832   56394 ssh_runner.go:195] Run: which lz4
	I0923 11:46:31.376875   56394 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:46:31.381009   56394 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:46:31.381042   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 11:46:30.702200   56777 provision.go:87] duration metric: took 490.520586ms to configureAuth
	I0923 11:46:30.702242   56777 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:46:30.702441   56777 config.go:182] Loaded profile config "kubernetes-upgrade-193704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:46:30.702549   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:30.705589   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.706048   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:30.706082   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:30.706292   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:30.706484   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:30.706649   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:30.706783   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:30.706973   56777 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:30.707175   56777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:46:30.707190   56777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 11:46:32.750082   56394 crio.go:462] duration metric: took 1.373228815s to copy over tarball
	I0923 11:46:32.750157   56394 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:46:34.893393   56394 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143006453s)
	I0923 11:46:34.893434   56394 crio.go:469] duration metric: took 2.143321626s to extract the tarball
	I0923 11:46:34.893442   56394 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 11:46:34.931433   56394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:46:34.974012   56394 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:46:34.974040   56394 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:46:34.974047   56394 kubeadm.go:934] updating node { 192.168.72.153 8443 v1.31.1 crio true true} ...
	I0923 11:46:34.974546   56394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-283725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-283725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
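The kubelet fragment above is a systemd drop-in: the empty ExecStart= clears the ExecStart inherited from the packaged kubelet.service before the minikube-specific command line replaces it. It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), then picked up by daemon-reload and a service start. A minimal sketch of applying the same drop-in by hand:
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-283725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.153
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet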
	I0923 11:46:34.974652   56394 ssh_runner.go:195] Run: crio config
	I0923 11:46:35.023028   56394 cni.go:84] Creating CNI manager for ""
	I0923 11:46:35.023059   56394 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:46:35.023070   56394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:46:35.023096   56394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.153 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-283725 NodeName:auto-283725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:46:35.023289   56394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-283725"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:46:35.023366   56394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:46:35.033816   56394 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:46:35.033879   56394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:46:35.043870   56394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0923 11:46:35.061045   56394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:46:35.078097   56394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
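At this point the kubeadm config rendered above has been written to /var/tmp/minikube/kubeadm.yaml.new on the node; it is copied over /var/tmp/minikube/kubeadm.yaml just before init, further down. Not something the test does, but a rendered file like this can be inspected without touching the node by pointing kubeadm's dry-run mode at it:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run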
	I0923 11:46:35.095755   56394 ssh_runner.go:195] Run: grep 192.168.72.153	control-plane.minikube.internal$ /etc/hosts
	I0923 11:46:35.099684   56394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:46:35.112606   56394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:46:35.251737   56394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:46:35.269416   56394 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725 for IP: 192.168.72.153
	I0923 11:46:35.269441   56394 certs.go:194] generating shared ca certs ...
	I0923 11:46:35.269457   56394 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.269605   56394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:46:35.269642   56394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:46:35.269651   56394 certs.go:256] generating profile certs ...
	I0923 11:46:35.269709   56394 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.key
	I0923 11:46:35.269722   56394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.crt with IP's: []
	I0923 11:46:35.385783   56394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.crt ...
	I0923 11:46:35.385813   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.crt: {Name:mk54c110f129aa9266cff2849b67caf463096912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.385973   56394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.key ...
	I0923 11:46:35.385983   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/client.key: {Name:mk2508b596282f4b20baecfe4ada1400361f0c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.386056   56394 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key.f4a72e13
	I0923 11:46:35.386073   56394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt.f4a72e13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.153]
	I0923 11:46:35.547080   56394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt.f4a72e13 ...
	I0923 11:46:35.547111   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt.f4a72e13: {Name:mk2e734bc42224201f5f8a7077d132586a48885b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.547280   56394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key.f4a72e13 ...
	I0923 11:46:35.547293   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key.f4a72e13: {Name:mk586ad09c4c86608bdd603ba149d701f46e0fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.547378   56394 certs.go:381] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt.f4a72e13 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt
	I0923 11:46:35.547470   56394 certs.go:385] copying /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key.f4a72e13 -> /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key
	I0923 11:46:35.547536   56394 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.key
	I0923 11:46:35.547553   56394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.crt with IP's: []
	I0923 11:46:35.732938   56394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.crt ...
	I0923 11:46:35.732967   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.crt: {Name:mkc80094d5042e3954fe4d9eb88d77ce1a099e8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:35.733115   56394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.key ...
	I0923 11:46:35.733125   56394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.key: {Name:mk6c8fbf5eb453689936ec0ecfa6b6c9f0f372a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
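The three profile certificates generated above are all signed by the shared minikubeCA: a client cert for kubectl, an apiserver serving cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.72.153, and an aggregator (front-proxy) client cert. One way to double-check the SANs baked into the generated apiserver cert, assuming openssl is available on the host:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'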
	I0923 11:46:35.733291   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:46:35.733327   56394 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:46:35.733336   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:46:35.733357   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:46:35.733396   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:46:35.733438   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:46:35.733511   56394 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:46:35.734108   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:46:35.759415   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:46:35.783780   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:46:35.808447   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:46:35.833374   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0923 11:46:35.857716   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:46:35.882282   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:46:35.907789   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/auto-283725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:46:35.931303   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:46:35.973133   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:46:36.000278   56394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:46:36.030449   56394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:46:36.054420   56394 ssh_runner.go:195] Run: openssl version
	I0923 11:46:36.066404   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:46:36.080679   56394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:46:36.085655   56394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:46:36.085716   56394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:46:36.093825   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:46:36.106870   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:46:36.118172   56394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:36.122839   56394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:36.122897   56394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:36.128859   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:46:36.140257   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:46:36.152589   56394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:46:36.157080   56394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:46:36.157130   56394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:46:36.162835   56394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
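The block above installs each CA into the guest's trust store the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it so verification can find the cert by hash. A condensed sketch of the same technique for a single cert (the hash differs per cert, b5213941 for minikubeCA in this run):
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"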
	I0923 11:46:36.174382   56394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:46:36.178462   56394 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:46:36.178527   56394 kubeadm.go:392] StartCluster: {Name:auto-283725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-283725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:46:36.178628   56394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:46:36.178694   56394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:46:36.214553   56394 cri.go:89] found id: ""
	I0923 11:46:36.214619   56394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:46:36.224863   56394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:46:36.238784   56394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:46:36.253577   56394 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:46:36.253598   56394 kubeadm.go:157] found existing configuration files:
	
	I0923 11:46:36.253642   56394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:46:36.264097   56394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:46:36.264159   56394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:46:36.275901   56394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:46:36.286937   56394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:46:36.287008   56394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:46:36.296860   56394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:46:36.306332   56394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:46:36.306397   56394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:46:36.316893   56394 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:46:36.326347   56394 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:46:36.326409   56394 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
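The grep/rm pairs above are the stale-config cleanup: any pre-existing kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. Here every grep exits with status 2 because the files do not exist yet, so each rm is a no-op. The same check written as a small loop:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done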
	I0923 11:46:36.336023   56394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:46:36.393597   56394 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:46:36.393814   56394 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:46:36.503696   56394 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:46:36.503844   56394 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:46:36.503979   56394 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:46:36.512110   56394 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:46:36.657733   56394 out.go:235]   - Generating certificates and keys ...
	I0923 11:46:36.657850   56394 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:46:36.657913   56394 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:46:36.703122   56394 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:46:36.798323   56394 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:46:36.893865   56394 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:46:37.122315   56394 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:46:37.251195   56394 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:46:37.251438   56394 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-283725 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	I0923 11:46:37.436125   56394 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:46:37.436336   56394 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-283725 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	I0923 11:46:37.505034   56394 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:46:37.596038   56394 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:46:37.662676   56394 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:46:37.662844   56394 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:46:37.879234   57554 start.go:364] duration metric: took 14.357580703s to acquireMachinesLock for "flannel-283725"
	I0923 11:46:37.879305   57554 start.go:93] Provisioning new machine with config: &{Name:flannel-283725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-283725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 11:46:37.879436   57554 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 11:46:38.058429   57554 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 11:46:38.058709   57554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:46:38.058769   57554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:46:38.073733   57554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0923 11:46:38.074173   57554 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:46:38.074684   57554 main.go:141] libmachine: Using API Version  1
	I0923 11:46:38.074706   57554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:46:38.075015   57554 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:46:38.075200   57554 main.go:141] libmachine: (flannel-283725) Calling .GetMachineName
	I0923 11:46:38.075376   57554 main.go:141] libmachine: (flannel-283725) Calling .DriverName
	I0923 11:46:38.075558   57554 start.go:159] libmachine.API.Create for "flannel-283725" (driver="kvm2")
	I0923 11:46:38.075612   57554 client.go:168] LocalClient.Create starting
	I0923 11:46:38.075647   57554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem
	I0923 11:46:38.075690   57554 main.go:141] libmachine: Decoding PEM data...
	I0923 11:46:38.075711   57554 main.go:141] libmachine: Parsing certificate...
	I0923 11:46:38.075778   57554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem
	I0923 11:46:38.075802   57554 main.go:141] libmachine: Decoding PEM data...
	I0923 11:46:38.075817   57554 main.go:141] libmachine: Parsing certificate...
	I0923 11:46:38.075839   57554 main.go:141] libmachine: Running pre-create checks...
	I0923 11:46:38.075850   57554 main.go:141] libmachine: (flannel-283725) Calling .PreCreateCheck
	I0923 11:46:38.076152   57554 main.go:141] libmachine: (flannel-283725) Calling .GetConfigRaw
	I0923 11:46:38.076545   57554 main.go:141] libmachine: Creating machine...
	I0923 11:46:38.076561   57554 main.go:141] libmachine: (flannel-283725) Calling .Create
	I0923 11:46:38.076666   57554 main.go:141] libmachine: (flannel-283725) Creating KVM machine...
	I0923 11:46:38.077780   57554 main.go:141] libmachine: (flannel-283725) DBG | found existing default KVM network
	I0923 11:46:38.078830   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.078672   57707 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:65:c2} reservation:<nil>}
	I0923 11:46:38.079611   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.079542   57707 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:82:98} reservation:<nil>}
	I0923 11:46:38.080902   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.080831   57707 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a50a0}
	I0923 11:46:38.080923   57554 main.go:141] libmachine: (flannel-283725) DBG | created network xml: 
	I0923 11:46:38.080931   57554 main.go:141] libmachine: (flannel-283725) DBG | <network>
	I0923 11:46:38.080935   57554 main.go:141] libmachine: (flannel-283725) DBG |   <name>mk-flannel-283725</name>
	I0923 11:46:38.080941   57554 main.go:141] libmachine: (flannel-283725) DBG |   <dns enable='no'/>
	I0923 11:46:38.080944   57554 main.go:141] libmachine: (flannel-283725) DBG |   
	I0923 11:46:38.080953   57554 main.go:141] libmachine: (flannel-283725) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0923 11:46:38.080961   57554 main.go:141] libmachine: (flannel-283725) DBG |     <dhcp>
	I0923 11:46:38.080971   57554 main.go:141] libmachine: (flannel-283725) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0923 11:46:38.080978   57554 main.go:141] libmachine: (flannel-283725) DBG |     </dhcp>
	I0923 11:46:38.080996   57554 main.go:141] libmachine: (flannel-283725) DBG |   </ip>
	I0923 11:46:38.081008   57554 main.go:141] libmachine: (flannel-283725) DBG |   
	I0923 11:46:38.081013   57554 main.go:141] libmachine: (flannel-283725) DBG | </network>
	I0923 11:46:38.081020   57554 main.go:141] libmachine: (flannel-283725) DBG | 
	I0923 11:46:38.104794   57554 main.go:141] libmachine: (flannel-283725) DBG | trying to create private KVM network mk-flannel-283725 192.168.61.0/24...
	I0923 11:46:38.176570   57554 main.go:141] libmachine: (flannel-283725) DBG | private KVM network mk-flannel-283725 192.168.61.0/24 created
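Above, the kvm2 driver walks the existing libvirt networks (192.168.39.0/24 and 192.168.50.0/24 are taken), picks the next free /24, and creates the private DHCP-enabled network mk-flannel-283725 from the XML it printed. Roughly the equivalent done by hand with virsh, reusing the XML from the log (the file name is illustrative):
	cat > /tmp/mk-flannel-283725.xml <<-'EOF'
	<network>
	  <name>mk-flannel-283725</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	EOF
	virsh --connect qemu:///system net-create /tmp/mk-flannel-283725.xml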
	I0923 11:46:38.176616   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.176551   57707 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:46:38.176650   57554 main.go:141] libmachine: (flannel-283725) Setting up store path in /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725 ...
	I0923 11:46:38.176686   57554 main.go:141] libmachine: (flannel-283725) Building disk image from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 11:46:38.176715   57554 main.go:141] libmachine: (flannel-283725) Downloading /home/jenkins/minikube-integration/19689-3961/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 11:46:38.420628   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.420507   57707 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725/id_rsa...
	I0923 11:46:37.989416   56394 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:46:38.150590   56394 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:46:38.320959   56394 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:46:38.601030   56394 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:46:38.838584   56394 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:46:38.839082   56394 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:46:38.841683   56394 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:46:37.630810   56777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 11:46:37.630834   56777 machine.go:96] duration metric: took 7.785721408s to provisionDockerMachine
	I0923 11:46:37.630848   56777 start.go:293] postStartSetup for "kubernetes-upgrade-193704" (driver="kvm2")
	I0923 11:46:37.630861   56777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:46:37.630883   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:37.631244   56777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:46:37.631287   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:37.634273   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.634666   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:37.634693   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.634838   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:37.635101   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:37.635249   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:37.635394   56777 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:46:37.721200   56777 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:46:37.725758   56777 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:46:37.725788   56777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/addons for local assets ...
	I0923 11:46:37.725858   56777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3961/.minikube/files for local assets ...
	I0923 11:46:37.725947   56777 filesync.go:149] local asset: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem -> 111392.pem in /etc/ssl/certs
	I0923 11:46:37.726056   56777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:46:37.736433   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:46:37.765623   56777 start.go:296] duration metric: took 134.762687ms for postStartSetup
	I0923 11:46:37.765670   56777 fix.go:56] duration metric: took 7.947288686s for fixHost
	I0923 11:46:37.765696   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:37.768418   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.768778   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:37.768812   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.768953   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:37.769153   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:37.769289   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:37.769446   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:37.769590   56777 main.go:141] libmachine: Using SSH client type: native
	I0923 11:46:37.769770   56777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0923 11:46:37.769783   56777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:46:37.879066   56777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091997.849706841
	
	I0923 11:46:37.879093   56777 fix.go:216] guest clock: 1727091997.849706841
	I0923 11:46:37.879102   56777 fix.go:229] Guest: 2024-09-23 11:46:37.849706841 +0000 UTC Remote: 2024-09-23 11:46:37.765676187 +0000 UTC m=+67.129116385 (delta=84.030654ms)
	I0923 11:46:37.879143   56777 fix.go:200] guest clock delta is within tolerance: 84.030654ms
	I0923 11:46:37.879156   56777 start.go:83] releasing machines lock for "kubernetes-upgrade-193704", held for 8.060814473s
	I0923 11:46:37.879186   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:37.879457   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:46:37.882486   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.882885   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:37.882918   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.883091   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:37.883639   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:37.883807   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .DriverName
	I0923 11:46:37.883884   56777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:46:37.883938   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:37.884022   56777 ssh_runner.go:195] Run: cat /version.json
	I0923 11:46:37.884042   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHHostname
	I0923 11:46:37.886773   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.887013   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.887134   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:37.887171   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.887285   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:37.887383   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:37.887421   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:37.887422   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:37.887566   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHPort
	I0923 11:46:37.887654   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:37.887782   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHKeyPath
	I0923 11:46:37.887785   56777 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:46:37.887891   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetSSHUsername
	I0923 11:46:37.887981   56777 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/kubernetes-upgrade-193704/id_rsa Username:docker}
	I0923 11:46:37.962365   56777 ssh_runner.go:195] Run: systemctl --version
	I0923 11:46:37.988135   56777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 11:46:38.152693   56777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:46:38.159067   56777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:46:38.159137   56777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:46:38.173355   56777 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:46:38.173402   56777 start.go:495] detecting cgroup driver to use...
	I0923 11:46:38.173468   56777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:46:38.198371   56777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:46:38.219956   56777 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:46:38.220008   56777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:46:38.236720   56777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:46:38.257089   56777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:46:38.424519   56777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:46:38.577333   56777 docker.go:233] disabling docker service ...
	I0923 11:46:38.577439   56777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:46:38.598414   56777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:46:38.615437   56777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:46:38.825585   56777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:46:39.052721   56777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:46:39.113331   56777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:46:39.139812   56777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 11:46:39.139880   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.155249   56777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 11:46:39.155338   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.169562   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.179887   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.190417   56777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:46:39.201441   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.211715   56777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.241519   56777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 11:46:39.265850   56777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:46:39.279616   56777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:46:39.290342   56777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:46:39.476191   56777 ssh_runner.go:195] Run: sudo systemctl restart crio
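The sed/sh commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", net.ipv4.ip_unprivileged_port_start=0 is re-added under default_sysctls, and /etc/crictl.yaml points crictl at the CRI-O socket; daemon-reload plus the crio restart then picks all of that up. After the restart the result can be spot-checked with something like:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version          # cri-o 1.29.1, matching the version output a few lines below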
	I0923 11:46:40.194270   56777 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 11:46:40.194345   56777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 11:46:40.200135   56777 start.go:563] Will wait 60s for crictl version
	I0923 11:46:40.200190   56777 ssh_runner.go:195] Run: which crictl
	I0923 11:46:40.204982   56777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:46:40.257243   56777 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 11:46:40.257329   56777 ssh_runner.go:195] Run: crio --version
	I0923 11:46:40.288436   56777 ssh_runner.go:195] Run: crio --version
	I0923 11:46:40.324515   56777 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 11:46:40.325984   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) Calling .GetIP
	I0923 11:46:40.329244   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:40.329721   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:e9:38", ip: ""} in network mk-kubernetes-upgrade-193704: {Iface:virbr1 ExpiryTime:2024-09-23 12:45:01 +0000 UTC Type:0 Mac:52:54:00:6e:e9:38 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-193704 Clientid:01:52:54:00:6e:e9:38}
	I0923 11:46:40.329775   56777 main.go:141] libmachine: (kubernetes-upgrade-193704) DBG | domain kubernetes-upgrade-193704 has defined IP address 192.168.39.77 and MAC address 52:54:00:6e:e9:38 in network mk-kubernetes-upgrade-193704
	I0923 11:46:40.329989   56777 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:46:40.335688   56777 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:46:40.335803   56777 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 11:46:40.335862   56777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:46:40.382011   56777 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:46:40.382037   56777 crio.go:433] Images already preloaded, skipping extraction
	I0923 11:46:40.382096   56777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:46:40.430858   56777 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 11:46:40.430880   56777 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:46:40.430889   56777 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.1 crio true true} ...
	I0923 11:46:40.431003   56777 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-193704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:46:40.431079   56777 ssh_runner.go:195] Run: crio config
	I0923 11:46:40.493100   56777 cni.go:84] Creating CNI manager for ""
	I0923 11:46:40.493121   56777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 11:46:40.493193   56777 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:46:40.493252   56777 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-193704 NodeName:kubernetes-upgrade-193704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:46:40.493464   56777 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-193704"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
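The config dump above is what gets copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal sketch (not something this test run executes), the file can be inspected and parsed without applying it; the kubeadm path is an assumption based on the versioned binaries directory listed on the next line:

    minikube -p kubernetes-upgrade-193704 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
    # dry-run parses the config and runs preflight checks without touching the running cluster
    minikube -p kubernetes-upgrade-193704 ssh "sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"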
	I0923 11:46:40.493523   56777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:46:40.506036   56777 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:46:40.506110   56777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:46:40.518698   56777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0923 11:46:40.537862   56777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:46:40.556432   56777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0923 11:46:40.577325   56777 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0923 11:46:40.582510   56777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
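The mkdir and the three scp lines above put the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the kubeadm config in place, and the daemon-reload makes systemd re-read the unit files. A rough sketch of the same sequence run by hand on the node (paths as in the log; the last command is only for confirmation):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # ...write 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new into those directories...
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    systemctl cat kubelet --no-pager    # shows the unit file with the drop-in merged in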
	I0923 11:46:38.882449   56394 out.go:235]   - Booting up control plane ...
	I0923 11:46:38.882603   56394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:46:38.882734   56394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:46:38.882850   56394 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:46:38.882984   56394 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:46:38.883113   56394 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:46:38.883165   56394 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:46:39.071652   56394 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:46:39.071808   56394 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:46:39.573008   56394 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.381024ms
	I0923 11:46:39.573159   56394 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
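The kubelet-check and api-check phases above are plain HTTP(S) health probes with a 4m timeout each. Roughly the same checks can be reproduced by hand on the node (a sketch: port 10248 is the kubelet healthz port kubeadm prints above, while 8443 assumes minikube's default API server bind port; /healthz on the API server is readable anonymously under default RBAC):

    curl -sf http://127.0.0.1:10248/healthz; echo      # kubelet health
    curl -sk https://127.0.0.1:8443/healthz; echo      # API server health (-k skips CA verification)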
	I0923 11:46:38.788059   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.787930   57707 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725/flannel-283725.rawdisk...
	I0923 11:46:38.788086   57554 main.go:141] libmachine: (flannel-283725) DBG | Writing magic tar header
	I0923 11:46:38.788102   57554 main.go:141] libmachine: (flannel-283725) DBG | Writing SSH key tar header
	I0923 11:46:38.788114   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:38.788065   57707 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725 ...
	I0923 11:46:38.788183   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725
	I0923 11:46:38.788227   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725 (perms=drwx------)
	I0923 11:46:38.788252   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube/machines (perms=drwxr-xr-x)
	I0923 11:46:38.788264   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube/machines
	I0923 11:46:38.788280   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:46:38.788294   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19689-3961
	I0923 11:46:38.788308   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 11:46:38.788321   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home/jenkins
	I0923 11:46:38.788332   57554 main.go:141] libmachine: (flannel-283725) DBG | Checking permissions on dir: /home
	I0923 11:46:38.788344   57554 main.go:141] libmachine: (flannel-283725) DBG | Skipping /home - not owner
	I0923 11:46:38.788355   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961/.minikube (perms=drwxr-xr-x)
	I0923 11:46:38.788370   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins/minikube-integration/19689-3961 (perms=drwxrwxr-x)
	I0923 11:46:38.788381   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 11:46:38.788391   57554 main.go:141] libmachine: (flannel-283725) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 11:46:38.788401   57554 main.go:141] libmachine: (flannel-283725) Creating domain...
	I0923 11:46:38.789605   57554 main.go:141] libmachine: (flannel-283725) define libvirt domain using xml: 
	I0923 11:46:38.789636   57554 main.go:141] libmachine: (flannel-283725) <domain type='kvm'>
	I0923 11:46:38.789651   57554 main.go:141] libmachine: (flannel-283725)   <name>flannel-283725</name>
	I0923 11:46:38.789663   57554 main.go:141] libmachine: (flannel-283725)   <memory unit='MiB'>3072</memory>
	I0923 11:46:38.789672   57554 main.go:141] libmachine: (flannel-283725)   <vcpu>2</vcpu>
	I0923 11:46:38.789679   57554 main.go:141] libmachine: (flannel-283725)   <features>
	I0923 11:46:38.789694   57554 main.go:141] libmachine: (flannel-283725)     <acpi/>
	I0923 11:46:38.789706   57554 main.go:141] libmachine: (flannel-283725)     <apic/>
	I0923 11:46:38.789714   57554 main.go:141] libmachine: (flannel-283725)     <pae/>
	I0923 11:46:38.789728   57554 main.go:141] libmachine: (flannel-283725)     
	I0923 11:46:38.789740   57554 main.go:141] libmachine: (flannel-283725)   </features>
	I0923 11:46:38.789747   57554 main.go:141] libmachine: (flannel-283725)   <cpu mode='host-passthrough'>
	I0923 11:46:38.789755   57554 main.go:141] libmachine: (flannel-283725)   
	I0923 11:46:38.789761   57554 main.go:141] libmachine: (flannel-283725)   </cpu>
	I0923 11:46:38.789803   57554 main.go:141] libmachine: (flannel-283725)   <os>
	I0923 11:46:38.789830   57554 main.go:141] libmachine: (flannel-283725)     <type>hvm</type>
	I0923 11:46:38.789843   57554 main.go:141] libmachine: (flannel-283725)     <boot dev='cdrom'/>
	I0923 11:46:38.789849   57554 main.go:141] libmachine: (flannel-283725)     <boot dev='hd'/>
	I0923 11:46:38.789858   57554 main.go:141] libmachine: (flannel-283725)     <bootmenu enable='no'/>
	I0923 11:46:38.789868   57554 main.go:141] libmachine: (flannel-283725)   </os>
	I0923 11:46:38.789876   57554 main.go:141] libmachine: (flannel-283725)   <devices>
	I0923 11:46:38.789883   57554 main.go:141] libmachine: (flannel-283725)     <disk type='file' device='cdrom'>
	I0923 11:46:38.789896   57554 main.go:141] libmachine: (flannel-283725)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725/boot2docker.iso'/>
	I0923 11:46:38.789907   57554 main.go:141] libmachine: (flannel-283725)       <target dev='hdc' bus='scsi'/>
	I0923 11:46:38.789918   57554 main.go:141] libmachine: (flannel-283725)       <readonly/>
	I0923 11:46:38.789927   57554 main.go:141] libmachine: (flannel-283725)     </disk>
	I0923 11:46:38.789947   57554 main.go:141] libmachine: (flannel-283725)     <disk type='file' device='disk'>
	I0923 11:46:38.789960   57554 main.go:141] libmachine: (flannel-283725)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 11:46:38.789976   57554 main.go:141] libmachine: (flannel-283725)       <source file='/home/jenkins/minikube-integration/19689-3961/.minikube/machines/flannel-283725/flannel-283725.rawdisk'/>
	I0923 11:46:38.789987   57554 main.go:141] libmachine: (flannel-283725)       <target dev='hda' bus='virtio'/>
	I0923 11:46:38.789998   57554 main.go:141] libmachine: (flannel-283725)     </disk>
	I0923 11:46:38.790009   57554 main.go:141] libmachine: (flannel-283725)     <interface type='network'>
	I0923 11:46:38.790022   57554 main.go:141] libmachine: (flannel-283725)       <source network='mk-flannel-283725'/>
	I0923 11:46:38.790033   57554 main.go:141] libmachine: (flannel-283725)       <model type='virtio'/>
	I0923 11:46:38.790042   57554 main.go:141] libmachine: (flannel-283725)     </interface>
	I0923 11:46:38.790053   57554 main.go:141] libmachine: (flannel-283725)     <interface type='network'>
	I0923 11:46:38.790063   57554 main.go:141] libmachine: (flannel-283725)       <source network='default'/>
	I0923 11:46:38.790074   57554 main.go:141] libmachine: (flannel-283725)       <model type='virtio'/>
	I0923 11:46:38.790085   57554 main.go:141] libmachine: (flannel-283725)     </interface>
	I0923 11:46:38.790095   57554 main.go:141] libmachine: (flannel-283725)     <serial type='pty'>
	I0923 11:46:38.790104   57554 main.go:141] libmachine: (flannel-283725)       <target port='0'/>
	I0923 11:46:38.790113   57554 main.go:141] libmachine: (flannel-283725)     </serial>
	I0923 11:46:38.790123   57554 main.go:141] libmachine: (flannel-283725)     <console type='pty'>
	I0923 11:46:38.790134   57554 main.go:141] libmachine: (flannel-283725)       <target type='serial' port='0'/>
	I0923 11:46:38.790145   57554 main.go:141] libmachine: (flannel-283725)     </console>
	I0923 11:46:38.790153   57554 main.go:141] libmachine: (flannel-283725)     <rng model='virtio'>
	I0923 11:46:38.790163   57554 main.go:141] libmachine: (flannel-283725)       <backend model='random'>/dev/random</backend>
	I0923 11:46:38.790182   57554 main.go:141] libmachine: (flannel-283725)     </rng>
	I0923 11:46:38.790193   57554 main.go:141] libmachine: (flannel-283725)     
	I0923 11:46:38.790200   57554 main.go:141] libmachine: (flannel-283725)     
	I0923 11:46:38.790210   57554 main.go:141] libmachine: (flannel-283725)   </devices>
	I0923 11:46:38.790217   57554 main.go:141] libmachine: (flannel-283725) </domain>
	I0923 11:46:38.790229   57554 main.go:141] libmachine: (flannel-283725) 
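libmachine hands the XML above straight to libvirt to create the VM. Defining and booting an equivalent domain manually would look roughly like this (a sketch; qemu:///system is the connection URI recorded in the cluster config earlier in the log, and the XML is assumed to have been saved to a local file first):

    virsh --connect qemu:///system define /tmp/flannel-283725.xml
    virsh --connect qemu:///system start flannel-283725
    virsh --connect qemu:///system dumpxml flannel-283725 | head   # confirm what libvirt stored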
	I0923 11:46:38.862591   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:6a:f0:3c in network default
	I0923 11:46:38.863368   57554 main.go:141] libmachine: (flannel-283725) Ensuring networks are active...
	I0923 11:46:38.863395   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:38.864290   57554 main.go:141] libmachine: (flannel-283725) Ensuring network default is active
	I0923 11:46:38.864716   57554 main.go:141] libmachine: (flannel-283725) Ensuring network mk-flannel-283725 is active
	I0923 11:46:38.865342   57554 main.go:141] libmachine: (flannel-283725) Getting domain xml...
	I0923 11:46:38.866170   57554 main.go:141] libmachine: (flannel-283725) Creating domain...
	I0923 11:46:40.483908   57554 main.go:141] libmachine: (flannel-283725) Waiting to get IP...
	I0923 11:46:40.484816   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:40.485306   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:40.485358   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:40.485291   57707 retry.go:31] will retry after 285.401753ms: waiting for machine to come up
	I0923 11:46:40.772974   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:40.773692   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:40.773860   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:40.773818   57707 retry.go:31] will retry after 308.511253ms: waiting for machine to come up
	I0923 11:46:41.084632   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:41.085260   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:41.085283   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:41.085192   57707 retry.go:31] will retry after 459.194192ms: waiting for machine to come up
	I0923 11:46:41.545509   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:41.545989   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:41.546013   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:41.545934   57707 retry.go:31] will retry after 453.281595ms: waiting for machine to come up
	I0923 11:46:42.001518   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:42.002168   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:42.002301   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:42.002234   57707 retry.go:31] will retry after 522.274213ms: waiting for machine to come up
	I0923 11:46:42.526118   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:42.526709   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:42.526736   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:42.526643   57707 retry.go:31] will retry after 706.922006ms: waiting for machine to come up
	I0923 11:46:43.235015   57554 main.go:141] libmachine: (flannel-283725) DBG | domain flannel-283725 has defined MAC address 52:54:00:83:56:18 in network mk-flannel-283725
	I0923 11:46:43.235520   57554 main.go:141] libmachine: (flannel-283725) DBG | unable to find current IP address of domain flannel-283725 in network mk-flannel-283725
	I0923 11:46:43.235577   57554 main.go:141] libmachine: (flannel-283725) DBG | I0923 11:46:43.235488   57707 retry.go:31] will retry after 797.990938ms: waiting for machine to come up
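The retry loop above is waiting for the freshly booted VM to obtain a DHCP lease on the mk-flannel-283725 network. The same lease table can be read directly from libvirt on the host (a sketch; the network name and MAC address are the ones shown in the log lines above):

    virsh --connect qemu:///system net-dhcp-leases mk-flannel-283725
    virsh --connect qemu:///system net-dhcp-leases mk-flannel-283725 --mac 52:54:00:83:56:18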
	I0923 11:46:45.570848   56394 kubeadm.go:310] [api-check] The API server is healthy after 6.001732555s
	I0923 11:46:45.586397   56394 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:46:45.605566   56394 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:46:45.635743   56394 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:46:45.636020   56394 kubeadm.go:310] [mark-control-plane] Marking the node auto-283725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:46:45.663852   56394 kubeadm.go:310] [bootstrap-token] Using token: oru1at.9f7n6dirfu8pdl13
	I0923 11:46:40.777640   56777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:46:40.794703   56777 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704 for IP: 192.168.39.77
	I0923 11:46:40.794729   56777 certs.go:194] generating shared ca certs ...
	I0923 11:46:40.794752   56777 certs.go:226] acquiring lock for ca certs: {Name:mk988b59d89b8a4200d4f61465c76df2fb71bb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:46:40.794950   56777 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key
	I0923 11:46:40.795003   56777 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key
	I0923 11:46:40.795015   56777 certs.go:256] generating profile certs ...
	I0923 11:46:40.795119   56777 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/client.key
	I0923 11:46:40.795185   56777 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key.c7b3f995
	I0923 11:46:40.795234   56777 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key
	I0923 11:46:40.795406   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem (1338 bytes)
	W0923 11:46:40.795449   56777 certs.go:480] ignoring /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139_empty.pem, impossibly tiny 0 bytes
	I0923 11:46:40.795463   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:46:40.795502   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:46:40.795532   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:46:40.795561   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/certs/key.pem (1675 bytes)
	I0923 11:46:40.795614   56777 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem (1708 bytes)
	I0923 11:46:40.796479   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:46:40.836067   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:46:40.909413   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:46:40.986759   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:46:41.129296   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 11:46:41.419466   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 11:46:41.607638   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:46:41.658546   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/kubernetes-upgrade-193704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:46:41.885085   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/ssl/certs/111392.pem --> /usr/share/ca-certificates/111392.pem (1708 bytes)
	I0923 11:46:42.139154   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:46:42.263613   56777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3961/.minikube/certs/11139.pem --> /usr/share/ca-certificates/11139.pem (1338 bytes)
	I0923 11:46:42.360801   56777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:46:42.425037   56777 ssh_runner.go:195] Run: openssl version
	I0923 11:46:42.436280   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111392.pem && ln -fs /usr/share/ca-certificates/111392.pem /etc/ssl/certs/111392.pem"
	I0923 11:46:42.457748   56777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111392.pem
	I0923 11:46:42.467701   56777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:38 /usr/share/ca-certificates/111392.pem
	I0923 11:46:42.467768   56777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111392.pem
	I0923 11:46:42.477799   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111392.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:46:42.494583   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:46:42.512846   56777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:42.519168   56777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:42.519288   56777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:46:42.529157   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:46:42.548048   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11139.pem && ln -fs /usr/share/ca-certificates/11139.pem /etc/ssl/certs/11139.pem"
	I0923 11:46:42.571524   56777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11139.pem
	I0923 11:46:42.580254   56777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:38 /usr/share/ca-certificates/11139.pem
	I0923 11:46:42.580320   56777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11139.pem
	I0923 11:46:42.589541   56777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11139.pem /etc/ssl/certs/51391683.0"
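Each "ln -fs ... /etc/ssl/certs/<hash>.0" above creates the subject-hash symlink that OpenSSL uses to look CA certificates up in /etc/ssl/certs; the hash in the link name is exactly what the preceding "openssl x509 -hash -noout" call prints. A sketch of the pattern for one of the certificates from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                            # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    ls -l "/etc/ssl/certs/${h}.0"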
	I0923 11:46:42.606750   56777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:46:42.614360   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:46:42.622954   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:46:42.631706   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:46:42.642102   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:46:42.648685   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:46:42.658563   56777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
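"openssl x509 -checkend 86400" exits non-zero when the certificate expires within the next 86400 seconds (24 hours), so the six checks above are presumably deciding whether the existing control-plane and etcd certificates can be reused as-is. The same check can be swept over the whole certs tree on the node (a sketch, paths as in the log):

    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      openssl x509 -noout -in "$c" -checkend 86400 >/dev/null \
        && echo "ok       $c" \
        || echo "expiring $c"
    done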
	I0923 11:46:42.669364   56777 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-193704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-193704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:46:42.669508   56777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 11:46:42.669591   56777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:46:42.753698   56777 cri.go:89] found id: "e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577"
	I0923 11:46:42.753723   56777 cri.go:89] found id: "27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937"
	I0923 11:46:42.753729   56777 cri.go:89] found id: "71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5"
	I0923 11:46:42.753733   56777 cri.go:89] found id: "6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f"
	I0923 11:46:42.753738   56777 cri.go:89] found id: "0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa"
	I0923 11:46:42.753742   56777 cri.go:89] found id: "6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0"
	I0923 11:46:42.753746   56777 cri.go:89] found id: "2b992b6c6a47827b7a31a2d3fe92de5f67d4540b26ed3971807fb2e09f5741b4"
	I0923 11:46:42.753750   56777 cri.go:89] found id: "ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9"
	I0923 11:46:42.753755   56777 cri.go:89] found id: "e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876"
	I0923 11:46:42.753761   56777 cri.go:89] found id: "c081fdd7814eab142733887db30688ef2b58fa53537a4d6f99e1e1b026247c24"
	I0923 11:46:42.753765   56777 cri.go:89] found id: "d493467d7318fe4e935e7a566204a9eb72839bf6fa2731b1ad7f8780fb1f7693"
	I0923 11:46:42.753769   56777 cri.go:89] found id: "87812e934cf000f7619a5fe354bbe907d4fb4ce0809a40c020f7ac0a24c8fef2"
	I0923 11:46:42.753773   56777 cri.go:89] found id: "723438d45d391e187f71cfb06c31aa724c7e9221748d57ee4aece5052f2325b4"
	I0923 11:46:42.753777   56777 cri.go:89] found id: ""
	I0923 11:46:42.753825   56777 ssh_runner.go:195] Run: sudo runc list -f json
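Each ID above is a CRI-O container ID returned by the "crictl ps -a --quiet --label ..." invocation shown earlier, and the "sudo runc list -f json" call that follows queries the low-level runtime directly for the same containers. As a sketch, any one of the IDs can be tied back to its pod on the node (using the first ID in the list):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577 | grep -m1 '"io.kubernetes.pod.name"'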
	
	
	==> CRI-O <==
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.747973408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727092015747950911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e488f9fb-3ff6-4838-8c9d-341107f56bca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.748574322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d307d1e9-ff58-4d42-b8be-d02fe5929984 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.748645392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d307d1e9-ff58-4d42-b8be-d02fe5929984 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.749060635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a84b2aad5e9fcc536d8c9f3a5966e62281225215e1920d31803818cc561d9eb3,PodSandboxId:1b1f98e94bd6526d5919b1909aeeffec6201abfa9403f12ec2accfa97745ac5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727092012683835059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0168fd8dc8aa7516df2f2969502e099e5861a9a8cb34be66c05df2c9d3d86910,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727092012659613634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76519a1b21988f82259a8ade52a0ba4d68a772c6fdf7ea612731ce222f57daff,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727092008879504514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71b211d507df6b2abaf9e1a9a7a1886e027d47aa535bbb344e27e4f747d4988,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727092008887750587,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550f8ed8b5db3d548b2eb1bd6e176cdce6503a16b4ebb446ae1157d50292e947,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727092008902793820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7ce12087cb3285d3fddc3c4a9d0a36526381cbcfe5e3c9f539489cb20043a1,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727092008896270299,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4beece0541e4f22b39fc9c3e6219a25a3fb0ae3a427083ed591192e33db75d0,PodSandboxId:abc82fab3b2d46caaba52b17a7a618c52df58f888e12ce39b2f857ea31f324a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002986902427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b0eebf7d3ae8bbfd24d9c85874258e7f11cadab7fb9364b6684e2417cdf809,PodSandboxId:4d97c4ea9a13a85539a620480a9cb76f5e1f0b7e2652ffaa38e0506726afabc5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002887969416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727092001452156308,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727092001333701273,Labels:map[string]string{io.kubernete
s.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727092001321362881,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727092001353630540,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727092001258570303,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0,PodSandboxId:be1ca690cb18fb7fd8678f966e527f460c354e6098c710cb0f8fc4cd544b05fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727091999472119542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io
.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9,PodSandboxId:954336b2717ffa1d5584f5a3c4d736c4efff073dba5ef72c10cf2c16d6cb3e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932132805843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876,PodSandboxId:0b38ff86fda1a4b4d1cd190bbf4a55ffd5ca0b1ffcf2d061ec04f51faad9f79b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932096095326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d307d1e9-ff58-4d42-b8be-d02fe5929984 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.796389310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20944d94-200b-4277-bcb2-86914cd9a939 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.796507705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20944d94-200b-4277-bcb2-86914cd9a939 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.797347016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7c5705a-3f74-4c00-8bbc-2c47c6ff37f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.797704057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727092015797683501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7c5705a-3f74-4c00-8bbc-2c47c6ff37f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.798188429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74fd7feb-45a0-4354-846f-1f2384bc26e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.798270042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74fd7feb-45a0-4354-846f-1f2384bc26e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.798596624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a84b2aad5e9fcc536d8c9f3a5966e62281225215e1920d31803818cc561d9eb3,PodSandboxId:1b1f98e94bd6526d5919b1909aeeffec6201abfa9403f12ec2accfa97745ac5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727092012683835059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0168fd8dc8aa7516df2f2969502e099e5861a9a8cb34be66c05df2c9d3d86910,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727092012659613634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76519a1b21988f82259a8ade52a0ba4d68a772c6fdf7ea612731ce222f57daff,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727092008879504514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71b211d507df6b2abaf9e1a9a7a1886e027d47aa535bbb344e27e4f747d4988,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727092008887750587,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550f8ed8b5db3d548b2eb1bd6e176cdce6503a16b4ebb446ae1157d50292e947,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727092008902793820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7ce12087cb3285d3fddc3c4a9d0a36526381cbcfe5e3c9f539489cb20043a1,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727092008896270299,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4beece0541e4f22b39fc9c3e6219a25a3fb0ae3a427083ed591192e33db75d0,PodSandboxId:abc82fab3b2d46caaba52b17a7a618c52df58f888e12ce39b2f857ea31f324a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002986902427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b0eebf7d3ae8bbfd24d9c85874258e7f11cadab7fb9364b6684e2417cdf809,PodSandboxId:4d97c4ea9a13a85539a620480a9cb76f5e1f0b7e2652ffaa38e0506726afabc5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002887969416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727092001452156308,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727092001333701273,Labels:map[string]string{io.kubernete
s.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727092001321362881,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727092001353630540,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727092001258570303,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0,PodSandboxId:be1ca690cb18fb7fd8678f966e527f460c354e6098c710cb0f8fc4cd544b05fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727091999472119542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io
.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9,PodSandboxId:954336b2717ffa1d5584f5a3c4d736c4efff073dba5ef72c10cf2c16d6cb3e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932132805843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876,PodSandboxId:0b38ff86fda1a4b4d1cd190bbf4a55ffd5ca0b1ffcf2d061ec04f51faad9f79b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932096095326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74fd7feb-45a0-4354-846f-1f2384bc26e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.844680908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8277913-3e25-431d-b3e7-628ab391cf70 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.844780383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8277913-3e25-431d-b3e7-628ab391cf70 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.846555594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60fc949f-e553-42f4-8aa2-167989d2fbe2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.848065917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727092015847973302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60fc949f-e553-42f4-8aa2-167989d2fbe2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.848920565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3abd5a16-b6ca-4f9a-b55c-9b6a7cd00116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.849080216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3abd5a16-b6ca-4f9a-b55c-9b6a7cd00116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.849516294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a84b2aad5e9fcc536d8c9f3a5966e62281225215e1920d31803818cc561d9eb3,PodSandboxId:1b1f98e94bd6526d5919b1909aeeffec6201abfa9403f12ec2accfa97745ac5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727092012683835059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0168fd8dc8aa7516df2f2969502e099e5861a9a8cb34be66c05df2c9d3d86910,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727092012659613634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76519a1b21988f82259a8ade52a0ba4d68a772c6fdf7ea612731ce222f57daff,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727092008879504514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71b211d507df6b2abaf9e1a9a7a1886e027d47aa535bbb344e27e4f747d4988,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727092008887750587,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550f8ed8b5db3d548b2eb1bd6e176cdce6503a16b4ebb446ae1157d50292e947,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727092008902793820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7ce12087cb3285d3fddc3c4a9d0a36526381cbcfe5e3c9f539489cb20043a1,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727092008896270299,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4beece0541e4f22b39fc9c3e6219a25a3fb0ae3a427083ed591192e33db75d0,PodSandboxId:abc82fab3b2d46caaba52b17a7a618c52df58f888e12ce39b2f857ea31f324a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002986902427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b0eebf7d3ae8bbfd24d9c85874258e7f11cadab7fb9364b6684e2417cdf809,PodSandboxId:4d97c4ea9a13a85539a620480a9cb76f5e1f0b7e2652ffaa38e0506726afabc5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002887969416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727092001452156308,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727092001333701273,Labels:map[string]string{io.kubernete
s.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727092001321362881,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727092001353630540,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727092001258570303,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0,PodSandboxId:be1ca690cb18fb7fd8678f966e527f460c354e6098c710cb0f8fc4cd544b05fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727091999472119542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io
.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9,PodSandboxId:954336b2717ffa1d5584f5a3c4d736c4efff073dba5ef72c10cf2c16d6cb3e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932132805843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876,PodSandboxId:0b38ff86fda1a4b4d1cd190bbf4a55ffd5ca0b1ffcf2d061ec04f51faad9f79b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932096095326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3abd5a16-b6ca-4f9a-b55c-9b6a7cd00116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.887539681Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10adccb1-7276-4380-b50e-3cecb0507fb2 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.887629229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10adccb1-7276-4380-b50e-3cecb0507fb2 name=/runtime.v1.RuntimeService/Version
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.888802946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd9ddf21-9bf4-48ec-a4bb-55b4c8c1fe77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.889258483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727092015889229319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd9ddf21-9bf4-48ec-a4bb-55b4c8c1fe77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.889652318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1842dee3-ed2a-4279-89de-32233b722e3f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.889735740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1842dee3-ed2a-4279-89de-32233b722e3f name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 11:46:55 kubernetes-upgrade-193704 crio[2643]: time="2024-09-23 11:46:55.890131487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a84b2aad5e9fcc536d8c9f3a5966e62281225215e1920d31803818cc561d9eb3,PodSandboxId:1b1f98e94bd6526d5919b1909aeeffec6201abfa9403f12ec2accfa97745ac5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727092012683835059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0168fd8dc8aa7516df2f2969502e099e5861a9a8cb34be66c05df2c9d3d86910,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727092012659613634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76519a1b21988f82259a8ade52a0ba4d68a772c6fdf7ea612731ce222f57daff,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727092008879504514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71b211d507df6b2abaf9e1a9a7a1886e027d47aa535bbb344e27e4f747d4988,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727092008887750587,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:550f8ed8b5db3d548b2eb1bd6e176cdce6503a16b4ebb446ae1157d50292e947,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727092008902793820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7ce12087cb3285d3fddc3c4a9d0a36526381cbcfe5e3c9f539489cb20043a1,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727092008896270299,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4beece0541e4f22b39fc9c3e6219a25a3fb0ae3a427083ed591192e33db75d0,PodSandboxId:abc82fab3b2d46caaba52b17a7a618c52df58f888e12ce39b2f857ea31f324a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002986902427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b0eebf7d3ae8bbfd24d9c85874258e7f11cadab7fb9364b6684e2417cdf809,PodSandboxId:4d97c4ea9a13a85539a620480a9cb76f5e1f0b7e2652ffaa38e0506726afabc5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727092002887969416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577,PodSandboxId:4b1244e49d9f072c48d4c870edbaff3ca859cf215bf181470255d4a8e6fa795c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727092001452156308,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c22af7c-2c17-4c27-a55a-579d87a80521,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5,PodSandboxId:2944fe831b394460150d201e4b4319671fd7349afcf38fd78a4dae5871029d24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727092001333701273,Labels:map[string]string{io.kubernete
s.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d0d97df8e5ced16da241edd5f37053,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f,PodSandboxId:7a508518b8f7cdbae48f761da499e211f9740a7b306c802951389347a23dc9af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727092001321362881,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc57a1e712124554d9860f1a4d5c51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937,PodSandboxId:402655583e89a41e619079d37c6fb3dc9f63a7a4a3ff108ad230f5aa76862cbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727092001353630540,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be65b1c44b45088d83a7066260b0fa36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa,PodSandboxId:b28c4d21815dd08c28ed8e43129ae15349be18b417b048b5e729725bb1dd518c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727092001258570303,Labels:map[string]string{io.kubernetes
.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-193704,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49073ffda8dfa930fbbeac0be6f98550,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0,PodSandboxId:be1ca690cb18fb7fd8678f966e527f460c354e6098c710cb0f8fc4cd544b05fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727091999472119542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io
.kubernetes.pod.name: kube-proxy-wjgk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7dc360-19b3-427a-a276-c33272a9319c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9,PodSandboxId:954336b2717ffa1d5584f5a3c4d736c4efff073dba5ef72c10cf2c16d6cb3e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932132805843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9
-582tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037c3a75-3a2c-4ec6-a063-d2a147dbce92,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876,PodSandboxId:0b38ff86fda1a4b4d1cd190bbf4a55ffd5ca0b1ffcf2d061ec04f51faad9f79b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727091932096095326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ljwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 068d5bdb-c208-4ce6-b698-4533537ae525,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1842dee3-ed2a-4279-89de-32233b722e3f name=/runtime.v1.RuntimeService/ListContainers
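
For context, the ListContainersRequest/ListContainersResponse entries above are CRI gRPC calls that crio answers over its local unix socket (the same RPC the kubelet polls every few hundred milliseconds). Below is a minimal Go sketch of that call, assuming the default CRI-O socket path /var/run/crio/crio.sock; it is illustrative only and not part of the captured run. Against this node it would print essentially the same container set as the status table that follows.

	// listcontainers_sketch.go - minimal CRI ListContainers client (illustrative).
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path is the common CRI-O default; adjust if the host differs.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter takes the "No filters were applied, returning full
		// container list" path logged by crio's server/container_list.go above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
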
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a84b2aad5e9fc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago        Running             kube-proxy                2                   1b1f98e94bd65       kube-proxy-wjgk7
	0168fd8dc8aa7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   4b1244e49d9f0       storage-provisioner
	550f8ed8b5db3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago        Running             etcd                      2                   b28c4d21815dd       etcd-kubernetes-upgrade-193704
	7f7ce12087cb3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago        Running             kube-apiserver            2                   2944fe831b394       kube-apiserver-kubernetes-upgrade-193704
	c71b211d507df       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago        Running             kube-controller-manager   2                   7a508518b8f7c       kube-controller-manager-kubernetes-upgrade-193704
	76519a1b21988       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago        Running             kube-scheduler            2                   402655583e89a       kube-scheduler-kubernetes-upgrade-193704
	a4beece0541e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago       Running             coredns                   1                   abc82fab3b2d4       coredns-7c65d6cfc9-582tn
	a2b0eebf7d3ae       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago       Running             coredns                   1                   4d97c4ea9a13a       coredns-7c65d6cfc9-6ljwf
	e3e51fb17fbaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago       Exited              storage-provisioner       1                   4b1244e49d9f0       storage-provisioner
	27e23c1d6d9e0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 seconds ago       Exited              kube-scheduler            1                   402655583e89a       kube-scheduler-kubernetes-upgrade-193704
	71b9f16c67eb7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 seconds ago       Exited              kube-apiserver            1                   2944fe831b394       kube-apiserver-kubernetes-upgrade-193704
	6d7ff363f1aaf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 seconds ago       Exited              kube-controller-manager   1                   7a508518b8f7c       kube-controller-manager-kubernetes-upgrade-193704
	0a0b4d231c8d6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 seconds ago       Exited              etcd                      1                   b28c4d21815dd       etcd-kubernetes-upgrade-193704
	6314b1eb05a3d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 seconds ago       Exited              kube-proxy                1                   be1ca690cb18f       kube-proxy-wjgk7
	ac8b73d4ccc1e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   954336b2717ff       coredns-7c65d6cfc9-582tn
	e8083b2dba27b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   0b38ff86fda1a       coredns-7c65d6cfc9-6ljwf
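
The attempt counts and Exited/Running states above show that every control-plane container was restarted seconds before this capture; until the new kube-apiserver is serving, in-cluster clients such as coredns see connection-refused errors against 10.96.0.1:443 (visible in the coredns logs below). The following is a minimal client-go sketch of that readiness check, illustrative only and assuming it runs inside the cluster with the default service-account credentials.

	// apiserver_probe_sketch.go - in-cluster list call of the same shape the
	// coredns kubernetes plugin's reflector retries (illustrative).
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Resolves the in-cluster endpoint (https://10.96.0.1:443 in this cluster).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		// Same request shape as the reflector's initial list:
		// GET /api/v1/namespaces?limit=500
		if _, err := cs.CoreV1().Namespaces().List(context.Background(),
			metav1.ListOptions{Limit: 500}); err != nil {
			log.Printf("apiserver not ready yet: %v", err)
		} else {
			log.Print("apiserver reachable")
		}
	}
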
	
	
	==> coredns [a2b0eebf7d3ae8bbfd24d9c85874258e7f11cadab7fb9364b6684e2417cdf809] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [a4beece0541e4f22b39fc9c3e6219a25a3fb0ae3a427083ed591192e33db75d0] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [ac8b73d4ccc1e95250af912c69505e1cb321e08f685f572560deb714b1ee7ac9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1666469239]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.432) (total time: 30004ms):
	Trace[1666469239]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (11:46:02.436)
	Trace[1666469239]: [30.004566063s] [30.004566063s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[427059945]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.436) (total time: 30000ms):
	Trace[427059945]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:46:02.436)
	Trace[427059945]: [30.00059907s] [30.00059907s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1218893244]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.431) (total time: 30005ms):
	Trace[1218893244]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (11:46:02.437)
	Trace[1218893244]: [30.005401854s] [30.005401854s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e8083b2dba27bd6e807553ab9e2da480c39752d4c57b03d521d7d53def430876] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1799090150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.437) (total time: 30001ms):
	Trace[1799090150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:46:02.438)
	Trace[1799090150]: [30.001730904s] [30.001730904s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1023330010]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.437) (total time: 30001ms):
	Trace[1023330010]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:46:02.438)
	Trace[1023330010]: [30.001743399s] [30.001743399s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[561084325]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:45:32.438) (total time: 30001ms):
	Trace[561084325]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:46:02.438)
	Trace[561084325]: [30.001198209s] [30.001198209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-193704
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-193704
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:45:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-193704
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:46:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:46:52 +0000   Mon, 23 Sep 2024 11:45:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:46:52 +0000   Mon, 23 Sep 2024 11:45:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:46:52 +0000   Mon, 23 Sep 2024 11:45:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:46:52 +0000   Mon, 23 Sep 2024 11:45:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    kubernetes-upgrade-193704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 540e715a2ebc4d498d70bdcacb16845a
	  System UUID:                540e715a-2ebc-4d49-8d70-bdcacb16845a
	  Boot ID:                    aab6467f-6653-4b0c-9da3-1354ec716337
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-582tn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 coredns-7c65d6cfc9-6ljwf                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-kubernetes-upgrade-193704                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         81s
	  kube-system                 kube-apiserver-kubernetes-upgrade-193704             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-193704    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-wjgk7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-kubernetes-upgrade-193704             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    96s (x8 over 99s)  kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x7 over 99s)  kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  96s (x8 over 99s)  kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           86s                node-controller  Node kubernetes-upgrade-193704 event: Registered Node kubernetes-upgrade-193704 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-193704 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-193704 event: Registered Node kubernetes-upgrade-193704 in Controller
	
	
	==> dmesg <==
	[  +1.612995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.846981] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.062338] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062833] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.211267] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.136940] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.288106] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +4.133554] systemd-fstab-generator[726]: Ignoring "noauto" option for root device
	[  +2.133453] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.064921] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.575179] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.083783] kauditd_printk_skb: 69 callbacks suppressed
	[Sep23 11:46] kauditd_printk_skb: 107 callbacks suppressed
	[ +25.638591] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.171515] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +0.183657] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.248667] systemd-fstab-generator[2256]: Ignoring "noauto" option for root device
	[  +0.436079] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +1.280561] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +3.536131] kauditd_printk_skb: 270 callbacks suppressed
	[  +3.934660] systemd-fstab-generator[3671]: Ignoring "noauto" option for root device
	[  +4.659687] kauditd_printk_skb: 48 callbacks suppressed
	[  +1.135735] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	
	
	==> etcd [0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa] <==
	{"level":"info","ts":"2024-09-23T11:46:43.148314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:46:43.148340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgPreVoteResp from 226361457cf4c252 at term 2"}
	{"level":"info","ts":"2024-09-23T11:46:43.148354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:43.148360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgVoteResp from 226361457cf4c252 at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:43.148369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:43.148376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226361457cf4c252 elected leader 226361457cf4c252 at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:43.151499Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"226361457cf4c252","local-member-attributes":"{Name:kubernetes-upgrade-193704 ClientURLs:[https://192.168.39.77:2379]}","request-path":"/0/members/226361457cf4c252/attributes","cluster-id":"b43d13dd46d94ad8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:46:43.151553Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:46:43.152068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:46:43.152802Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:46:43.156044Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.77:2379"}
	{"level":"info","ts":"2024-09-23T11:46:43.163598Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:46:43.169699Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:43.169768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:43.174559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:46:46.839832Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T11:46:46.839950Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-193704","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"]}
	{"level":"warn","ts":"2024-09-23T11:46:46.840075Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:46:46.840141Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:46:46.841686Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:46:46.841737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:46:46.841823Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"226361457cf4c252","current-leader-member-id":"226361457cf4c252"}
	{"level":"info","ts":"2024-09-23T11:46:46.845324Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-09-23T11:46:46.845399Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-09-23T11:46:46.845413Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-193704","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"]}
	
	
	==> etcd [550f8ed8b5db3d548b2eb1bd6e176cdce6503a16b4ebb446ae1157d50292e947] <==
	{"level":"info","ts":"2024-09-23T11:46:49.242362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 switched to configuration voters=(2477931171060957778)"}
	{"level":"info","ts":"2024-09-23T11:46:49.243879Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","added-peer-id":"226361457cf4c252","added-peer-peer-urls":["https://192.168.39.77:2380"]}
	{"level":"info","ts":"2024-09-23T11:46:49.244058Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:46:49.244100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:46:49.249054Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T11:46:49.250301Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"226361457cf4c252","initial-advertise-peer-urls":["https://192.168.39.77:2380"],"listen-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:46:49.250413Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:46:49.250574Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-09-23T11:46:49.250632Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-09-23T11:46:50.225071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:50.225158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:50.225186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgPreVoteResp from 226361457cf4c252 at term 3"}
	{"level":"info","ts":"2024-09-23T11:46:50.225204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became candidate at term 4"}
	{"level":"info","ts":"2024-09-23T11:46:50.225212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 received MsgVoteResp from 226361457cf4c252 at term 4"}
	{"level":"info","ts":"2024-09-23T11:46:50.225223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 became leader at term 4"}
	{"level":"info","ts":"2024-09-23T11:46:50.225243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226361457cf4c252 elected leader 226361457cf4c252 at term 4"}
	{"level":"info","ts":"2024-09-23T11:46:50.231506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:46:50.232393Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:46:50.237348Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:46:50.259608Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:46:50.260764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:46:50.261706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.77:2379"}
	{"level":"info","ts":"2024-09-23T11:46:50.263117Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:50.265024Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:50.231461Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"226361457cf4c252","local-member-attributes":"{Name:kubernetes-upgrade-193704 ClientURLs:[https://192.168.39.77:2379]}","request-path":"/0/members/226361457cf4c252/attributes","cluster-id":"b43d13dd46d94ad8","publish-timeout":"7s"}
	
	
	==> kernel <==
	 11:46:56 up 2 min,  0 users,  load average: 1.99, 0.56, 0.20
	Linux kubernetes-upgrade-193704 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5] <==
	I0923 11:46:45.467243       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0923 11:46:45.467254       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0923 11:46:45.469590       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:46:45.469919       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0923 11:46:45.470056       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0923 11:46:45.470266       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0923 11:46:45.470362       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0923 11:46:45.470446       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0923 11:46:45.470545       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0923 11:46:45.467265       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0923 11:46:45.467271       1 controller.go:132] Ending legacy_token_tracking_controller
	I0923 11:46:45.470634       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0923 11:46:45.467278       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0923 11:46:45.467286       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0923 11:46:45.470903       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:46:45.471215       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0923 11:46:45.467234       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0923 11:46:45.480527       1 controller.go:157] Shutting down quota evaluator
	I0923 11:46:45.480971       1 controller.go:176] quota evaluator worker shutdown
	I0923 11:46:45.481092       1 controller.go:176] quota evaluator worker shutdown
	I0923 11:46:45.481099       1 controller.go:176] quota evaluator worker shutdown
	I0923 11:46:45.481107       1 controller.go:176] quota evaluator worker shutdown
	I0923 11:46:45.481114       1 controller.go:176] quota evaluator worker shutdown
	W0923 11:46:46.215647       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0923 11:46:46.217648       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [7f7ce12087cb3285d3fddc3c4a9d0a36526381cbcfe5e3c9f539489cb20043a1] <==
	I0923 11:46:52.051235       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:46:52.071914       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:46:52.072571       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:46:52.072655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:46:52.072747       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:46:52.080918       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:46:52.081213       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 11:46:52.081528       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:46:52.082425       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:46:52.082538       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:46:52.082567       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:46:52.083344       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 11:46:52.123769       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 11:46:52.126190       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:46:52.126224       1 policy_source.go:224] refreshing policies
	I0923 11:46:52.134651       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:46:52.154654       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:46:52.941263       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:46:53.630934       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 11:46:53.644581       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 11:46:53.691580       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 11:46:53.772133       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:46:53.778851       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:46:54.802128       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:46:55.662325       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f] <==
	I0923 11:46:43.443868       1 serving.go:386] Generated self-signed cert in-memory
	I0923 11:46:43.859437       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 11:46:43.859475       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:46:43.862436       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 11:46:43.862546       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 11:46:43.862948       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 11:46:43.863057       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [c71b211d507df6b2abaf9e1a9a7a1886e027d47aa535bbb344e27e4f747d4988] <==
	I0923 11:46:55.273056       1 shared_informer.go:320] Caches are synced for attach detach
	I0923 11:46:55.277519       1 shared_informer.go:320] Caches are synced for expand
	I0923 11:46:55.280462       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 11:46:55.304053       1 shared_informer.go:320] Caches are synced for namespace
	I0923 11:46:55.307636       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 11:46:55.309091       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0923 11:46:55.310402       1 shared_informer.go:320] Caches are synced for ephemeral
	I0923 11:46:55.310509       1 shared_informer.go:320] Caches are synced for TTL
	I0923 11:46:55.310802       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0923 11:46:55.311153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="298.02µs"
	I0923 11:46:55.314283       1 shared_informer.go:320] Caches are synced for deployment
	I0923 11:46:55.320755       1 shared_informer.go:320] Caches are synced for HPA
	I0923 11:46:55.322970       1 shared_informer.go:320] Caches are synced for GC
	I0923 11:46:55.331487       1 shared_informer.go:320] Caches are synced for disruption
	I0923 11:46:55.433812       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:46:55.446262       1 shared_informer.go:320] Caches are synced for endpoint
	I0923 11:46:55.455474       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0923 11:46:55.456904       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0923 11:46:55.456974       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-193704"
	I0923 11:46:55.467437       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0923 11:46:55.510678       1 shared_informer.go:320] Caches are synced for crt configmap
	I0923 11:46:55.514364       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:46:55.880749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:46:55.910121       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:46:55.910147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0] <==
	
	
	==> kube-proxy [a84b2aad5e9fcc536d8c9f3a5966e62281225215e1920d31803818cc561d9eb3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:46:52.944304       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:46:52.963428       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0923 11:46:52.963517       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:46:53.008211       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:46:53.008272       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:46:53.008296       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:46:53.014676       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:46:53.015149       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:46:53.015471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:46:53.020106       1 config.go:199] "Starting service config controller"
	I0923 11:46:53.020215       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:46:53.021071       1 config.go:328] "Starting node config controller"
	I0923 11:46:53.021103       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:46:53.021448       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:46:53.021490       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:46:53.120565       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:46:53.121781       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:46:53.121877       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937] <==
	I0923 11:46:44.060456       1 serving.go:386] Generated self-signed cert in-memory
	W0923 11:46:45.295885       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:46:45.295941       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:46:45.295955       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:46:45.295970       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:46:45.391865       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 11:46:45.393188       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0923 11:46:45.393320       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0923 11:46:45.395476       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0923 11:46:45.395674       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0923 11:46:45.395762       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [76519a1b21988f82259a8ade52a0ba4d68a772c6fdf7ea612731ce222f57daff] <==
	I0923 11:46:50.170598       1 serving.go:386] Generated self-signed cert in-memory
	W0923 11:46:52.006817       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:46:52.006933       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:46:52.007052       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:46:52.007058       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:46:52.088970       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 11:46:52.090182       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:46:52.093704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 11:46:52.093802       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:46:52.093841       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:46:52.093821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 11:46:52.194902       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.605863    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84bc57a1e712124554d9860f1a4d5c51-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-193704\" (UID: \"84bc57a1e712124554d9860f1a4d5c51\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.605887    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84bc57a1e712124554d9860f1a4d5c51-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-193704\" (UID: \"84bc57a1e712124554d9860f1a4d5c51\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.605909    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84bc57a1e712124554d9860f1a4d5c51-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-193704\" (UID: \"84bc57a1e712124554d9860f1a4d5c51\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.605933    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be65b1c44b45088d83a7066260b0fa36-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-193704\" (UID: \"be65b1c44b45088d83a7066260b0fa36\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.796225    3678 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: E0923 11:46:48.797432    3678 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.77:8443: connect: connection refused" node="kubernetes-upgrade-193704"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.857814    3678 scope.go:117] "RemoveContainer" containerID="6d7ff363f1aaf432506edbd0b1f87b3a9e7961340e68cdf1192cfdd03f68c42f"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.858515    3678 scope.go:117] "RemoveContainer" containerID="27e23c1d6d9e09383ff8f5a20b6ed4797062aa8d7cbb62f74ec64f6405a12937"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.860137    3678 scope.go:117] "RemoveContainer" containerID="0a0b4d231c8d6cecd69cfe7c8e7f8838ce1155e4d452daea5672b908fa6e6daa"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:48.861777    3678 scope.go:117] "RemoveContainer" containerID="71b9f16c67eb7c797de9511c5478364b6e68f04947a1a73333bca62c5cad5cc5"
	Sep 23 11:46:48 kubernetes-upgrade-193704 kubelet[3678]: E0923 11:46:48.975715    3678 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-193704?timeout=10s\": dial tcp 192.168.39.77:8443: connect: connection refused" interval="800ms"
	Sep 23 11:46:49 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:49.198633    3678 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-193704"
	Sep 23 11:46:49 kubernetes-upgrade-193704 kubelet[3678]: E0923 11:46:49.199564    3678 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.77:8443: connect: connection refused" node="kubernetes-upgrade-193704"
	Sep 23 11:46:50 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:50.001482    3678 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-193704"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.208834    3678 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-193704"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.208954    3678 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-193704"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.209083    3678 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.211176    3678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.329566    3678 apiserver.go:52] "Watching apiserver"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.386103    3678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.424389    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec7dc360-19b3-427a-a276-c33272a9319c-xtables-lock\") pod \"kube-proxy-wjgk7\" (UID: \"ec7dc360-19b3-427a-a276-c33272a9319c\") " pod="kube-system/kube-proxy-wjgk7"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.424506    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c22af7c-2c17-4c27-a55a-579d87a80521-tmp\") pod \"storage-provisioner\" (UID: \"4c22af7c-2c17-4c27-a55a-579d87a80521\") " pod="kube-system/storage-provisioner"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.424538    3678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec7dc360-19b3-427a-a276-c33272a9319c-lib-modules\") pod \"kube-proxy-wjgk7\" (UID: \"ec7dc360-19b3-427a-a276-c33272a9319c\") " pod="kube-system/kube-proxy-wjgk7"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.638226    3678 scope.go:117] "RemoveContainer" containerID="e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577"
	Sep 23 11:46:52 kubernetes-upgrade-193704 kubelet[3678]: I0923 11:46:52.638647    3678 scope.go:117] "RemoveContainer" containerID="6314b1eb05a3d661e30b618d1e42133742d5fb18eba4db3f3d6409335f3e67e0"
	
	
	==> storage-provisioner [0168fd8dc8aa7516df2f2969502e099e5861a9a8cb34be66c05df2c9d3d86910] <==
	I0923 11:46:52.798296       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:46:52.822820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:46:52.823242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e3e51fb17fbaaa7facd8f3bfe54235dca4b35c6785f0f84a4f40a3700c7a8577] <==
	I0923 11:46:42.657240       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:46:55.354227   57959 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19689-3961/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-193704 -n kubernetes-upgrade-193704
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-193704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-193704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-193704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-193704: (1.121713946s)
--- FAIL: TestKubernetesUpgrade (427.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (7200.053s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-880167 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 12:20:57.439801   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:21:02.464648   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/custom-flannel-283725/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestStartStop (39m19s)
		TestStartStop/group/default-k8s-diff-port (29m33s)
		TestStartStop/group/default-k8s-diff-port/serial (29m33s)
		TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6m32s)
		TestStartStop/group/newest-cni (1m5s)
		TestStartStop/group/newest-cni/serial (1m5s)
		TestStartStop/group/newest-cni/serial/SecondStart (7s)

                                                
                                                
goroutine 9288 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0000fc4e0, 0xc0007ddbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000012210, {0x4585140, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4641680?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00012bae0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00012bae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00066ef80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 164 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3469 [chan receive, 1 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0013d96c0, 0x2f08530)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 3279
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4861 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4886
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4328 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001952700, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4323
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 111 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 163 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0007f1f50, 0xc0007f1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x20?, 0xc0007f1f50, 0xc0007f1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0001fe600?, 0xc000064620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 112
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 162 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a8c110, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013aad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a8c140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00006a020, {0x3204700, 0xc0008fc090}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00006a020, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 112
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 112 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a8c140, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1387 [chan receive, 101 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009cccc0, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 9231 [syscall]:
syscall.Syscall6(0xf7, 0x3, 0x11, 0xc001570b30, 0x4, 0xc001f502d0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc000491308?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0013d5680)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0013d5680)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006bc340, 0xc0013d5680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3228a38, 0xc000476540}, 0xc0006bc340, {0xc0005ba900, 0x11}, {0x0?, 0xc0015b3f60?}, {0x559033?, 0x4b162f?}, {0xc0019aa100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0006bc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0006bc340, 0xc001e08580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 9076
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3921 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001c07000, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3919
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4094 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4093
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4092 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a8dbd0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007ecd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a8dc00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00197d080, {0x3204700, 0xc001c083c0}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00197d080, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4057
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5675 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009cd9c0, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4609 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001eb8410, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00149bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001eb8440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e321c0, {0x3204700, 0xc0016c0150}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e321c0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4685
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1391 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0017dd750, 0xc0007eef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x0?, 0xc0017dd750, 0xc0017dd798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc000859500?, 0xc00020f6b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017dd7d0?, 0x593ba4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5674 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 7153 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x32289c8, 0xc00054ae10}, {0x321c9d0, 0xc000542ca0}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3228a38?, 0xc000493420?}, 0x3b9aca00, 0xc001393d38?, 0x1, 0xc001393b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3228a38, 0xc000493420}, 0xc0012cb1e0, {0xc001e4f5a0, 0x1c}, {0x25ad332, 0x14}, {0x25c0743, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x3228a38, 0xc000493420}, 0xc0012cb1e0, {0xc001e4f5a0, 0x1c}, {0x25afa70?, 0xc0017d8f60?}, {0x559033?, 0x4b162f?}, {0xc000621300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0012cb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0012cb1e0, 0xc001a88180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4822
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1656 [select, 99 minutes]:
net/http.(*persistConn).writeLoop(0xc0021fbd40)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1653
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 3984 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc001f5b750, 0xc001647f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x30?, 0xc001f5b750, 0xc001f5b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc0013d9520?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001328480?, 0xc0015d3030?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4822 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001d829c0, {0x25ad396?, 0xc000bf0570?}, 0xc001a88180)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001d829c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001d829c0, 0xc00181ee80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3472
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4707 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4706
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3975 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3974
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4839 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001952ed0, 0x5)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001dcdd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001952f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a7e2a0, {0x3204700, 0xc0017620f0}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a7e2a0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3985 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3984
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1400 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc000859980, 0xc001440380)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1399
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4436 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0017da750, 0xc001517f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x0?, 0xc0017da750, 0xc0017da798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0x9e92b6?, 0xc001329380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017da7d0?, 0x593ba4?, 0xc0019537c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4450
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1390 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009ccc90, 0x29)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013c3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009cccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007917a0, {0x3204700, 0xc000793470}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007917a0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1386 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3471 [chan receive, 1 minutes]:
testing.(*T).Run(0xc0013d9ba0, {0x258e05c?, 0x0?}, 0xc000912700)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d9ba0, 0xc0009ccd80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3469
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1441 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0xc001782a80, 0xc00012fc70)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1323
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 1392 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1391
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4093 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc001f59750, 0xc0007f0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x5c?, 0xc001f59750, 0xc001f59798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc00154bbd0?, 0xc0016d6d90?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001f597d0?, 0x593ba4?, 0xc0013d5680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4057
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1655 [select, 99 minutes]:
net/http.(*persistConn).readLoop(0xc0021fbd40)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1653
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 9232 [IO wait]:
internal/poll.runtime_pollWait(0x7f897781ac88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001acb500?, 0xc00159ea2a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001acb500, {0xc00159ea2a, 0x5d6, 0x5d6})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd23f8, {0xc00159ea2a?, 0x411b30?, 0x22a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0018d4d50, {0x3203040, 0xc0005403f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x32031c0, 0xc0018d4d50}, {0x3203040, 0xc0005403f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bd23f8?, {0x32031c0, 0xc0018d4d50})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000bd23f8, {0x32031c0, 0xc0018d4d50})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x32031c0, 0xc0018d4d50}, {0x32030c0, 0xc000bd23f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001bc31f0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 9231
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 4840 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc000beff50, 0xc001514f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x5c?, 0xc000beff50, 0xc000beff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc0015613b0?, 0xc000569e30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000beffd0?, 0x9e6805?, 0xc000209c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1185 [IO wait, 103 minutes]:
internal/poll.runtime_pollWait(0x7f897781b5d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0015c2080?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0015c2080)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0015c2080)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0009cc440)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009cc440)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc001361e00, {0x321c370, 0xc0009cc440})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc001361e00)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0013d8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1182
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 3472 [chan receive, 30 minutes]:
testing.(*T).Run(0xc0013d9d40, {0x258e05c?, 0x0?}, 0xc00181ee80)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d9d40, 0xc0009ccdc0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3469
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4317 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0017dbf50, 0xc001513f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0xd0?, 0xc0017dbf50, 0xc0017dbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0x9e92b6?, 0xc001b84900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001b84a80?, 0xc0014419d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 9250 [select]:
golang.org/x/net/http2.(*ClientConn).Ping(0xc000858180, {0x3228a38, 0xc000450540})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:3061 +0x2c5
golang.org/x/net/http2.(*ClientConn).healthCheck(0xc000858180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:876 +0xb1
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 4450 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00084f000, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4316 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0019526d0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00164bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001952700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00078c550, {0x3204700, 0xc0017e8000}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00078c550, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4783 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001952f00, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4835
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5091 [IO wait]:
internal/poll.runtime_pollWait(0x7f897781ad90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001a89a00?, 0xc00159e000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a89a00, {0xc00159e000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001a89a00, {0xc00159e000?, 0x10?, 0xc0015188a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0012dc2a8, {0xc00159e000?, 0xc00159e05f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00197ea98, {0xc00159e000?, 0x0?, 0xc00197ea98?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0020f2638, {0x3204d40, 0xc00197ea98})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0020f2388, {0x7f89744f7618, 0xc00245e480}, 0xc001518a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0020f2388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0020f2388, {0xc001418000, 0x1000, 0xc001631180?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0019d1500, {0xc000820820, 0x9, 0x4555740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3203260, 0xc0019d1500}, {0xc000820820, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000820820, 0x9, 0x47b965?}, {0x3203260?, 0xc0019d1500?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0008207e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001518fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000858f00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5090
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1542 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bf2480, 0xc001bc2e70)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1541
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4437 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4436
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5646 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc00149df50, 0xc00149df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x0?, 0xc00149df50, 0xc00149df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0x9e92b6?, 0xc000858780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000858780?, 0x593ba4?, 0xc0000656c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5675
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3279 [chan receive, 41 minutes]:
testing.(*T).Run(0xc0012ca820, {0x258cd17?, 0x559033?}, 0x2f08530)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0012ca820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0012ca820, 0x2f08338)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5174 [IO wait]:
internal/poll.runtime_pollWait(0x7f897781b4c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000885900?, 0xc0014f8000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000885900, {0xc0014f8000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000885900, {0xc0014f8000?, 0x9d68b2?, 0xc00156c9a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000bd26a0, {0xc0014f8000?, 0xc001c001e0?, 0xc0014f805f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc000491620, {0xc0014f8000?, 0x0?, 0xc000491620?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0006ff438, {0x3204d40, 0xc000491620})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0006ff188, {0x3204220, 0xc000bd26a0}, 0xc00156ca10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0006ff188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0006ff188, {0xc00160b000, 0x1000, 0xc001540e00?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001e21620, {0xc001956200, 0x9, 0x4555740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3203260, 0xc001e21620}, {0xc001956200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001956200, 0x9, 0x47b965?}, {0x3203260?, 0xc001e21620?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0019561c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00156cfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001329680)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5173
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 9285 [select]:
golang.org/x/net/http2.(*ClientConn).Ping(0xc001329680, {0x3228a38, 0xc00044d730})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:3061 +0x2c5
golang.org/x/net/http2.(*ClientConn).healthCheck(0xc001329680)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:876 +0xb1
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 4862 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a8c940, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4886
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4482 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4449
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4056 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4088
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4901 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4900
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4469 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4468
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 9076 [chan receive]:
testing.(*T).Run(0xc0012cb860, {0x25987af?, 0x0?}, 0xc001e08580)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0012cb860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0012cb860, 0xc000912700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3471
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3983 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001c06fd0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000acd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001c07000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00197c4a0, {0x3204700, 0xc0018fc720}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00197c4a0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4782 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4835
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4353 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4318 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4317
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4706 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0017def50, 0xc001512f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0x7?, 0xc0017def50, 0xc0017def98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc0006bd040?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017defd0?, 0x593ba4?, 0xc00179a1b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4685
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4841 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4840
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4057 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a8dc00, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4088
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4435 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00084efd0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014a0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00084f000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00078dc30, {0x3204700, 0xc0017630b0}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00078dc30, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4450
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4468 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc00164df50, 0xc00164df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0xc0?, 0xc00164df50, 0xc00164df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0013d4c00?, 0xc00012fdc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4483
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3974 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0xb0?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc0013d9520?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x593ba4?, 0xc00012f3b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3988
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3988 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020c8600, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3973 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0020c85d0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007f3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020c8600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000791760, {0x3204700, 0xc0017e8540}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000791760, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3988
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4327 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4323
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5645 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0009cd990, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007dbd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009cd9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009fa0f0, {0x3204700, 0xc00179aa20}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009fa0f0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5675
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4467 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0020c9210, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000abd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0020c9240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f7def0, {0x3204700, 0xc0018bb6e0}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f7def0, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4483
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4685 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001eb8440, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3920 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3919
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4483 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0020c9240, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4449
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3987 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4899 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a8c910, 0x5)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001f33d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a8c940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021feb50, {0x3204700, 0xc0018d4900}, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021feb50, 0x3b9aca00, 0x0, 0x1, 0xc00012e2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4684 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4900 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc00012e2a0}, 0xc0015aff50, 0xc0015aff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc00012e2a0}, 0xe0?, 0xc0015aff50, 0xc0015aff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc00012e2a0?}, 0xc001d82340?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015affd0?, 0x593ba4?, 0xc0015affa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5647 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5646
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4992 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f897781b6d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001ce1280?, 0xc00085e000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001ce1280, {0xc00085e000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001ce1280, {0xc00085e000?, 0x10?, 0xc0014a18a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000bd22a0, {0xc00085e000?, 0xc00085e05f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc002016d50, {0xc00085e000?, 0x0?, 0xc002016d50?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0020f22b8, {0x3204d40, 0xc002016d50})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0020f2008, {0x7f89744f7618, 0xc001684768}, 0xc0014a1a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0020f2008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0020f2008, {0xc000786000, 0x1000, 0xc00240c8c0?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0015cb680, {0xc0013484a0, 0x9, 0x4555740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3203260, 0xc0015cb680}, {0xc0013484a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0013484a0, 0x9, 0x47b965?}, {0x3203260?, 0xc0015cb680?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001348460)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0014a1fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000858180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 4991
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 9266 [select]:
os/exec.(*Cmd).watchCtx(0xc0013d5680, 0xc001bc3e30)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 9231
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 9233 [IO wait]:
internal/poll.runtime_pollWait(0x7f897781ab80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001acb5c0?, 0xc0013f67dd?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001acb5c0, {0xc0013f67dd, 0x3823, 0x3823})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bd2438, {0xc0013f67dd?, 0x5?, 0x3eaf?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0018d4d80, {0x3203040, 0xc0012dc508})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x32031c0, 0xc0018d4d80}, {0x3203040, 0xc0012dc508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bd2438?, {0x32031c0, 0xc0018d4d80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000bd2438, {0x32031c0, 0xc0018d4d80})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x32031c0, 0xc0018d4d80}, {0x32030c0, 0xc000bd2438}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001e08580?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 9231
	/usr/local/go/src/os/exec/exec.go:732 +0x98b
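
Note on the goroutine dump above: most of the parked goroutines follow the same client-go shape, a cert-rotation controller whose worker loops on wait.Until/BackoffUntil and whose poller sits in wait.PollImmediateUntilWithContext, both blocked on a stop channel that never closes while the suite runs. The sketch below is a minimal, self-contained illustration of that pattern using only the k8s.io/apimachinery wait helpers visible in the traces; the condition body, intervals, and printed messages are assumptions for illustration, not client-go's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Stand-in for the never-closing stop channel seen in the traces; here a
	// timeout closes it so the sketch terminates on its own.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	stopCh := ctx.Done()

	// Worker loop, analogous to the runWorker goroutines above: wake once a
	// second and process work until the stop channel closes.
	go wait.Until(func() {
		fmt.Println("processing next work item")
	}, time.Second, stopCh)

	// Poller, analogous to the PollImmediateUntilWithContext goroutines above:
	// re-check a condition until it reports done or the context ends.
	_ = wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond,
		func(ctx context.Context) (bool, error) {
			return false, nil // never done in this sketch; cancellation stops it
		})
	fmt.Println("stopped:", ctx.Err())
}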

                                                
                                    

Test pass (225/275)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 35.56
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.25
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 13.27
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 114.02
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 140.58
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 12.21
38 TestAddons/parallel/CSI 68.27
39 TestAddons/parallel/Headlamp 19.94
40 TestAddons/parallel/CloudSpanner 6.77
41 TestAddons/parallel/LocalPath 56.26
42 TestAddons/parallel/NvidiaDevicePlugin 6.76
43 TestAddons/parallel/Yakd 12.02
44 TestAddons/StoppedEnableDisable 7.55
45 TestCertOptions 95.11
46 TestCertExpiration 329.33
48 TestForceSystemdFlag 54.1
49 TestForceSystemdEnv 64.35
51 TestKVMDriverInstallOrUpdate 5.08
55 TestErrorSpam/setup 42.98
56 TestErrorSpam/start 0.33
57 TestErrorSpam/status 0.74
58 TestErrorSpam/pause 1.58
59 TestErrorSpam/unpause 1.74
60 TestErrorSpam/stop 5.01
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 55.35
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.89
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.15
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
72 TestFunctional/serial/CacheCmd/cache/add_local 2.25
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 34.04
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.36
83 TestFunctional/serial/LogsFileCmd 1.45
84 TestFunctional/serial/InvalidService 4.32
86 TestFunctional/parallel/ConfigCmd 0.33
87 TestFunctional/parallel/DashboardCmd 20.36
88 TestFunctional/parallel/DryRun 0.3
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.17
94 TestFunctional/parallel/ServiceCmdConnect 7.55
95 TestFunctional/parallel/AddonsCmd 0.11
96 TestFunctional/parallel/PersistentVolumeClaim 41.19
98 TestFunctional/parallel/SSHCmd 0.44
99 TestFunctional/parallel/CpCmd 1.37
101 TestFunctional/parallel/FileSync 0.29
102 TestFunctional/parallel/CertSync 1.49
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
110 TestFunctional/parallel/License 0.68
111 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
113 TestFunctional/parallel/MountCmd/any-port 11.5
114 TestFunctional/parallel/ProfileCmd/profile_list 0.41
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
116 TestFunctional/parallel/MountCmd/specific-port 1.93
117 TestFunctional/parallel/ServiceCmd/List 0.47
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
120 TestFunctional/parallel/ServiceCmd/Format 0.38
121 TestFunctional/parallel/MountCmd/VerifyCleanup 1.65
122 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.91
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.36
139 TestFunctional/parallel/ImageCommands/Setup 1.98
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.9
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.19
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.93
147 TestFunctional/parallel/ImageCommands/ImageRemove 2.45
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.5
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 200.68
157 TestMultiControlPlane/serial/DeployApp 7.89
158 TestMultiControlPlane/serial/PingHostFromPods 1.18
159 TestMultiControlPlane/serial/AddWorkerNode 59.44
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
162 TestMultiControlPlane/serial/CopyFile 12.53
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.02
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.61
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
171 TestMultiControlPlane/serial/RestartCluster 320.21
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
173 TestMultiControlPlane/serial/AddSecondaryNode 79.44
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 88.78
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.72
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.62
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.18
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 89.7
210 TestMountStart/serial/StartWithMountFirst 26.28
211 TestMountStart/serial/VerifyMountFirst 0.36
212 TestMountStart/serial/StartWithMountSecond 25.44
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.69
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.28
217 TestMountStart/serial/RestartStopped 23.27
218 TestMountStart/serial/VerifyMountPostStop 0.37
221 TestMultiNode/serial/FreshStart2Nodes 110.63
222 TestMultiNode/serial/DeployApp2Nodes 5.78
223 TestMultiNode/serial/PingHostFrom2Pods 0.78
224 TestMultiNode/serial/AddNode 55.54
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.58
227 TestMultiNode/serial/CopyFile 7.07
228 TestMultiNode/serial/StopNode 2.35
229 TestMultiNode/serial/StartAfterStop 40.4
231 TestMultiNode/serial/DeleteNode 2.31
233 TestMultiNode/serial/RestartMultiNode 185.28
234 TestMultiNode/serial/ValidateNameConflict 43.86
241 TestScheduledStopUnix 115.1
245 TestRunningBinaryUpgrade 218.58
249 TestStoppedBinaryUpgrade/Setup 2.61
253 TestStoppedBinaryUpgrade/Upgrade 144.72
258 TestNetworkPlugins/group/false 2.9
270 TestPause/serial/Start 65.05
271 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
274 TestNoKubernetes/serial/StartWithK8s 46.09
275 TestPause/serial/SecondStartNoReconfiguration 36.02
276 TestNoKubernetes/serial/StartWithStopK8s 6.01
277 TestNoKubernetes/serial/Start 28.95
278 TestPause/serial/Pause 0.77
279 TestPause/serial/VerifyStatus 0.24
280 TestPause/serial/Unpause 0.65
281 TestPause/serial/PauseAgain 0.87
282 TestPause/serial/DeletePaused 0.97
283 TestPause/serial/VerifyDeletedResources 0.6
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
285 TestNoKubernetes/serial/ProfileList 1.07
286 TestNoKubernetes/serial/Stop 1.34
287 TestNoKubernetes/serial/StartNoArgs 67.35
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
289 TestNetworkPlugins/group/auto/Start 158.72
290 TestNetworkPlugins/group/flannel/Start 88.05
291 TestNetworkPlugins/group/enable-default-cni/Start 59.94
292 TestNetworkPlugins/group/auto/KubeletFlags 0.28
293 TestNetworkPlugins/group/auto/NetCatPod 14.26
294 TestNetworkPlugins/group/auto/DNS 0.16
295 TestNetworkPlugins/group/auto/Localhost 0.13
296 TestNetworkPlugins/group/auto/HairPin 0.18
297 TestNetworkPlugins/group/flannel/ControllerPod 6.01
298 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
299 TestNetworkPlugins/group/flannel/NetCatPod 11.24
300 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
301 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
302 TestNetworkPlugins/group/bridge/Start 56.5
303 TestNetworkPlugins/group/flannel/DNS 0.22
304 TestNetworkPlugins/group/flannel/Localhost 0.13
305 TestNetworkPlugins/group/flannel/HairPin 0.15
306 TestNetworkPlugins/group/enable-default-cni/DNS 16.1
307 TestNetworkPlugins/group/calico/Start 85.32
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
310 TestNetworkPlugins/group/kindnet/Start 75.5
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
312 TestNetworkPlugins/group/bridge/NetCatPod 12.25
313 TestNetworkPlugins/group/bridge/DNS 21.58
314 TestNetworkPlugins/group/bridge/Localhost 0.16
315 TestNetworkPlugins/group/bridge/HairPin 0.17
316 TestNetworkPlugins/group/custom-flannel/Start 73.49
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.21
321 TestNetworkPlugins/group/calico/NetCatPod 11.23
322 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
323 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
324 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
325 TestNetworkPlugins/group/calico/DNS 0.2
326 TestNetworkPlugins/group/calico/Localhost 0.14
327 TestNetworkPlugins/group/calico/HairPin 0.12
328 TestNetworkPlugins/group/kindnet/DNS 0.23
329 TestNetworkPlugins/group/kindnet/Localhost 0.21
330 TestNetworkPlugins/group/kindnet/HairPin 0.24
335 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
336 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
337 TestNetworkPlugins/group/custom-flannel/DNS 0.17
338 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
339 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (35.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-944972 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-944972 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (35.564566102s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.56s)
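
The json-events test drives minikube with -o=json, which writes one JSON event per stdout line. A minimal sketch of consuming that stream is shown below; it assumes only that each line is a standalone JSON object, the profile name is hypothetical, and no particular event schema is asserted.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the test command above; the profile name here is made up.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-example", "--force",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Decode each stdout line generically so no event schema is assumed.
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long event lines
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			fmt.Println("non-JSON line:", sc.Text())
			continue
		}
		fmt.Printf("event: %v\n", event)
	}
	_ = cmd.Wait()
}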

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:21:39.527916   11139 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0923 10:21:39.528010   11139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
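
The preload-exists assertion reduces to checking that the cached tarball reported in the log lines above is present on disk. A minimal illustrative sketch follows; the path is copied from the log output, while the program itself is an assumption for illustration, not the test's actual helper.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the preload.go log line above.
	preload := "/home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if info, err := os.Stat(preload); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("preload present (%d bytes)\n", info.Size())
	}
}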

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-944972
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-944972: exit status 85 (249.147631ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |          |
	|         | -p download-only-944972        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:03.999557   11151 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:03.999755   11151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:03.999763   11151 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:03.999767   11151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:03.999936   11151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	W0923 10:21:04.000058   11151 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-3961/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-3961/.minikube/config/config.json: no such file or directory
	I0923 10:21:04.000595   11151 out.go:352] Setting JSON to true
	I0923 10:21:04.001496   11151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":207,"bootTime":1727086657,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:04.001594   11151 start.go:139] virtualization: kvm guest
	I0923 10:21:04.004359   11151 out.go:97] [download-only-944972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:21:04.004474   11151 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:21:04.004528   11151 notify.go:220] Checking for updates...
	I0923 10:21:04.005901   11151 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:21:04.007359   11151 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:04.008694   11151 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:21:04.009789   11151 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:04.010947   11151 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:21:04.013037   11151 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:21:04.013245   11151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:04.110455   11151 out.go:97] Using the kvm2 driver based on user configuration
	I0923 10:21:04.110484   11151 start.go:297] selected driver: kvm2
	I0923 10:21:04.110491   11151 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:21:04.110835   11151 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:04.110964   11151 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:21:04.125576   11151 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:21:04.125633   11151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:04.126131   11151 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 10:21:04.126273   11151 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:21:04.126298   11151 cni.go:84] Creating CNI manager for ""
	I0923 10:21:04.126346   11151 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:21:04.126354   11151 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:04.126400   11151 start.go:340] cluster config:
	{Name:download-only-944972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-944972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:04.126558   11151 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:04.128340   11151 out.go:97] Downloading VM boot image ...
	I0923 10:21:04.128372   11151 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 10:21:24.345524   11151 out.go:97] Starting "download-only-944972" primary control-plane node in "download-only-944972" cluster
	I0923 10:21:24.345550   11151 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 10:21:24.454836   11151 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:24.454866   11151 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:24.455022   11151 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 10:21:24.456672   11151 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:21:24.456687   11151 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:24.576097   11151 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:37.695275   11151 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:37.695365   11151 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:38.599514   11151 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0923 10:21:38.599825   11151 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/download-only-944972/config.json ...
	I0923 10:21:38.599852   11151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/download-only-944972/config.json: {Name:mk51359b2c690bfec68705c85f147a0968514c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:38.599998   11151 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 10:21:38.600176   11151 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-944972 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944972"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-944972
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (13.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-056027 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-056027 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.267038747s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.27s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:21:53.292441   11139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0923 10:21:53.292483   11139 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-056027
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-056027: exit status 85 (59.378185ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-944972        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-944972        | download-only-944972 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-056027 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-056027        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:40.061474   11458 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:40.061590   11458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:40.061599   11458 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:40.061604   11458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:40.061778   11458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:21:40.062328   11458 out.go:352] Setting JSON to true
	I0923 10:21:40.063205   11458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":243,"bootTime":1727086657,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:40.063308   11458 start.go:139] virtualization: kvm guest
	I0923 10:21:40.065407   11458 out.go:97] [download-only-056027] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:40.065546   11458 notify.go:220] Checking for updates...
	I0923 10:21:40.066777   11458 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:21:40.068025   11458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:40.069289   11458 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:21:40.070468   11458 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:21:40.071777   11458 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:21:40.073993   11458 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:21:40.074195   11458 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:40.105350   11458 out.go:97] Using the kvm2 driver based on user configuration
	I0923 10:21:40.105406   11458 start.go:297] selected driver: kvm2
	I0923 10:21:40.105414   11458 start.go:901] validating driver "kvm2" against <nil>
	I0923 10:21:40.105821   11458 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:40.105923   11458 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19689-3961/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 10:21:40.121337   11458 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 10:21:40.121400   11458 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:40.122146   11458 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 10:21:40.122341   11458 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:21:40.122374   11458 cni.go:84] Creating CNI manager for ""
	I0923 10:21:40.122441   11458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 10:21:40.122451   11458 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:40.122520   11458 start.go:340] cluster config:
	{Name:download-only-056027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-056027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:40.122646   11458 iso.go:125] acquiring lock: {Name:mk5910fd217a49ac1675eb6468ac5e43bf468777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:40.124364   11458 out.go:97] Starting "download-only-056027" primary control-plane node in "download-only-056027" cluster
	I0923 10:21:40.124383   11458 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:40.717565   11458 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:40.717628   11458 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:40.717825   11458 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:40.719640   11458 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 10:21:40.719660   11458 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:40.840470   11458 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19689-3961/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-056027 host does not exist
	  To start a cluster, run: "minikube start -p download-only-056027"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-056027
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0923 10:21:53.844483   11139 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-004546 --alsologtostderr --binary-mirror http://127.0.0.1:34819 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-004546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-004546
--- PASS: TestBinaryMirror (0.58s)
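Note on the binary.go line above: the checksum=file: suffix on the download URL means kubectl is verified against the published .sha256 file rather than cached unverified. A minimal Go sketch of that verification step (an illustration only, not minikube's download code; the /tmp destination is arbitrary):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download fetches url into dest and returns the hex SHA-256 of what was written.
func download(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	binURL := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	got, err := download(binURL, "/tmp/kubectl")
	if err != nil {
		panic(err)
	}
	// The .sha256 companion file holds the expected digest.
	resp, err := http.Get(binURL + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(body))[0]
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("kubectl verified:", got)
}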

                                                
                                    
x
+
TestOffline (114.02s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-147533 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-147533 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m52.968729703s)
helpers_test.go:175: Cleaning up "offline-crio-147533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-147533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-147533: (1.055139112s)
--- PASS: TestOffline (114.02s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-230451
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-230451: exit status 85 (46.648767ms)

                                                
                                                
-- stdout --
	* Profile "addons-230451" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-230451"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-230451
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-230451: exit status 85 (46.200774ms)

                                                
                                                
-- stdout --
	* Profile "addons-230451" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-230451"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (140.58s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-230451 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-230451 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m20.576368458s)
--- PASS: TestAddons/Setup (140.58s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-230451 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-230451 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.21s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b2v2k" [b41306b0-40aa-4b7e-b9f3-931550e87f01] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004920479s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-230451
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-230451: (6.200161403s)
--- PASS: TestAddons/parallel/InspektorGadget (12.21s)
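The "waiting 8m0s for pods matching ..." lines above are the test's label-selector poll until the gadget pod reports Running. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default location (an illustration, not the helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until at least one pod matching selector in ns
// reports phase Running, or the timeout expires.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat list errors as transient and keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and selector taken from the gadget test above.
	if err := waitForPodsRunning(context.Background(), cs, "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("gadget pod is running")
}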

                                                
                                    
x
+
TestAddons/parallel/CSI (68.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0923 10:32:38.016742   11139 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:32:38.025478   11139 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:32:38.025511   11139 kapi.go:107] duration metric: took 8.787937ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.799066ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-230451 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-230451 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bd43da3e-c3a6-4889-933f-e3b234584151] Pending
helpers_test.go:344: "task-pv-pod" [bd43da3e-c3a6-4889-933f-e3b234584151] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bd43da3e-c3a6-4889-933f-e3b234584151] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004692431s
addons_test.go:528: (dbg) Run:  kubectl --context addons-230451 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-230451 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-230451 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-230451 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-230451 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-230451 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-230451 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [473a5fcc-1118-4412-8a07-a361ede815d2] Pending
helpers_test.go:344: "task-pv-pod-restore" [473a5fcc-1118-4412-8a07-a361ede815d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [473a5fcc-1118-4412-8a07-a361ede815d2] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.060432472s
addons_test.go:570: (dbg) Run:  kubectl --context addons-230451 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-230451 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-230451 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.722107851s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.27s)
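The repeated "kubectl ... get pvc ... -o jsonpath={.status.phase}" lines above are the test polling the hpvc and hpvc-restore claims until they bind. A small Go sketch of the same loop, shelling out to kubectl exactly as the log shows (a hypothetical standalone program, not the test helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase runs the same kubectl query the test loops on and returns the phase string.
func pvcPhase(kubeContext, name, namespace string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Profile name taken from the log above; any minikube context works the same way.
	const kubeContext = "addons-230451"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase(kubeContext, "hpvc", "default")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}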

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-230451 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-v88qm" [225f3a75-22bb-46fc-8524-8a8eb61ef50f] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-v88qm" [225f3a75-22bb-46fc-8524-8a8eb61ef50f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-v88qm" [225f3a75-22bb-46fc-8524-8a8eb61ef50f] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004066132s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable headlamp --alsologtostderr -v=1: (6.049318717s)
--- PASS: TestAddons/parallel/Headlamp (19.94s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-r6tsf" [53ab60ce-cc9d-4cfc-8ea7-0377211c4549] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.010725559s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-230451
--- PASS: TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-230451 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-230451 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1be9563a-0099-4395-b271-6c07300521e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1be9563a-0099-4395-b271-6c07300521e9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1be9563a-0099-4395-b271-6c07300521e9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005100216s
addons_test.go:938: (dbg) Run:  kubectl --context addons-230451 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 ssh "cat /opt/local-path-provisioner/pvc-7588405d-d8e1-47cb-b3c2-c66ec9b2a455_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-230451 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-230451 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.482801142s)
--- PASS: TestAddons/parallel/LocalPath (56.26s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t2lzg" [6608f635-89c8-4811-9dca-ae138dbe1bd9] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003932629s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-230451
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-75ttv" [95dfca75-0de8-4805-9bb8-381a6efe04dc] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00381898s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-230451 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-230451 addons disable yakd --alsologtostderr -v=1: (6.01450047s)
--- PASS: TestAddons/parallel/Yakd (12.02s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (7.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-230451
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-230451: (7.285120555s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-230451
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-230451
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-230451
--- PASS: TestAddons/StoppedEnableDisable (7.55s)

                                                
                                    
x
+
TestCertOptions (95.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-796310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-796310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m33.649401994s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-796310 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-796310 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-796310 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-796310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-796310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-796310: (1.01711553s)
--- PASS: TestCertOptions (95.11s)

                                                
                                    
x
+
TestCertExpiration (329.33s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-516973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-516973 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m29.254279248s)
E0923 11:45:57.440429   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-516973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-516973 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (59.258674069s)
helpers_test.go:175: Cleaning up "cert-expiration-516973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-516973
--- PASS: TestCertExpiration (329.33s)

                                                
                                    
x
+
TestForceSystemdFlag (54.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-936120 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-936120 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.748622163s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-936120 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-936120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-936120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-936120: (1.158499626s)
--- PASS: TestForceSystemdFlag (54.10s)

                                                
                                    
x
+
TestForceSystemdEnv (64.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-694064 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-694064 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m3.571457235s)
helpers_test.go:175: Cleaning up "force-systemd-env-694064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-694064
--- PASS: TestForceSystemdEnv (64.35s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.08s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0923 11:43:38.872108   11139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:43:38.872257   11139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0923 11:43:38.898597   11139 install.go:62] docker-machine-driver-kvm2: exit status 1
W0923 11:43:38.898947   11139 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:43:38.899002   11139 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2262457540/001/docker-machine-driver-kvm2
I0923 11:43:39.129010   11139 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2262457540/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0003ad0a0 gz:0xc0003ad0a8 tar:0xc0003ad050 tar.bz2:0xc0003ad060 tar.gz:0xc0003ad070 tar.xz:0xc0003ad080 tar.zst:0xc0003ad090 tbz2:0xc0003ad060 tgz:0xc0003ad070 txz:0xc0003ad080 tzst:0xc0003ad090 xz:0xc0003ad100 zip:0xc0003ad930 zst:0xc0003ad108] Getters:map[file:0xc0008d3f10 http:0xc0006a6640 https:0xc0006a6690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:43:39.129053   11139 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2262457540/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.08s)
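The driver.go log above shows the install path: the arch-specific docker-machine-driver-kvm2-amd64 download fails because its checksum file returns 404, so the download falls back to the common (unsuffixed) release asset. A rough Go sketch of that try-then-fall-back pattern (the example.com URLs are placeholders, not the real release URLs, and this is not minikube's downloader):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url to dest, returning an error on any non-200 response.
func fetch(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// Hypothetical URLs standing in for the arch-specific and common driver assets.
	archURL := "https://example.com/docker-machine-driver-kvm2-amd64"
	commonURL := "https://example.com/docker-machine-driver-kvm2"
	dest := "/tmp/docker-machine-driver-kvm2"

	// Try the arch-specific artifact first; on failure, fall back to the
	// common version, mirroring the sequence in the log above.
	if err := fetch(archURL, dest); err != nil {
		fmt.Println("arch-specific download failed:", err, "- trying common version")
		if err := fetch(commonURL, dest); err != nil {
			fmt.Println("download failed:", err)
			os.Exit(1)
		}
	}
	fmt.Println("driver downloaded to", dest)
}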

                                                
                                    
x
+
TestErrorSpam/setup (42.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-949873 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-949873 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-949873 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-949873 --driver=kvm2  --container-runtime=crio: (42.978336532s)
--- PASS: TestErrorSpam/setup (42.98s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (5.01s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop: (1.613831973s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop: (1.946978632s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-949873 --log_dir /tmp/nospam-949873 stop: (1.446675195s)
--- PASS: TestErrorSpam/stop (5.01s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-3961/.minikube/files/etc/test/nested/copy/11139/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (55.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0923 10:39:15.431718   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.438090   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.449480   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.470983   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.512426   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.593889   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:15.755447   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:16.077119   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:16.719198   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:18.000711   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:20.563670   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:25.685320   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-870347 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.349720164s)
--- PASS: TestFunctional/serial/StartWithProxy (55.35s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0923 10:39:26.986747   11139 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --alsologtostderr -v=8
E0923 10:39:35.927044   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:39:56.408932   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-870347 --alsologtostderr -v=8: (40.88512486s)
functional_test.go:663: soft start took 40.885817277s for "functional-870347" cluster.
I0923 10:40:07.872187   11139 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (40.89s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-870347 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:3.1: (1.075987455s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:3.3: (1.237244676s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 cache add registry.k8s.io/pause:latest: (1.130981207s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-870347 /tmp/TestFunctionalserialCacheCmdcacheadd_local3688327088/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache add minikube-local-cache-test:functional-870347
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 cache add minikube-local-cache-test:functional-870347: (1.933333198s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache delete minikube-local-cache-test:functional-870347
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-870347
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.360191ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 kubectl -- --context functional-870347 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-870347 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 10:40:37.370422   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-870347 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.039098566s)
functional_test.go:761: restart took 34.039214937s for "functional-870347" cluster.
I0923 10:40:50.033576   11139 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.04s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-870347 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 logs: (1.355600264s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 logs --file /tmp/TestFunctionalserialLogsFileCmd563189236/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 logs --file /tmp/TestFunctionalserialLogsFileCmd563189236/001/logs.txt: (1.453229205s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-870347 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-870347
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-870347: exit status 115 (259.385865ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.190:30619 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-870347 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
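
For reference, the SVC_UNREACHABLE failure exercised above can be reproduced against any minikube profile once a NodePort Service whose selector matches no running pods has been applied; a minimal sketch, reusing the commands and testdata path from this run:

	# Apply the intentionally broken Service from the test's data directory.
	kubectl --context functional-870347 apply -f testdata/invalidsvc.yaml

	# Fails (exit status 115 in the run above) with SVC_UNREACHABLE: no running pod backs the service.
	out/minikube-linux-amd64 service invalid-svc -p functional-870347
	echo "exit code: $?"

	kubectl --context functional-870347 delete -f testdata/invalidsvc.yaml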

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 config get cpus: exit status 14 (63.442144ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 config get cpus: exit status 14 (45.953157ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
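
The exit status 14 seen twice above is what `config get` returns when the key is absent from the profile's config; the set/unset cycle the test drives can be replayed directly. A sketch using the same commands as the log:

	out/minikube-linux-amd64 -p functional-870347 config unset cpus
	out/minikube-linux-amd64 -p functional-870347 config get cpus     # exit 14: key not in config
	out/minikube-linux-amd64 -p functional-870347 config set cpus 2
	out/minikube-linux-amd64 -p functional-870347 config get cpus     # succeeds now that the key is set
	out/minikube-linux-amd64 -p functional-870347 config unset cpus
	out/minikube-linux-amd64 -p functional-870347 config get cpus     # exit 14 again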

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-870347 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-870347 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20895: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.36s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-870347 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.108243ms)

                                                
                                                
-- stdout --
	* [functional-870347] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:40:58.797648   20704 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:58.798072   20704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.798097   20704 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:58.798108   20704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.798519   20704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:40:58.799359   20704 out.go:352] Setting JSON to false
	I0923 10:40:58.800843   20704 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1402,"bootTime":1727086657,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:40:58.800982   20704 start.go:139] virtualization: kvm guest
	I0923 10:40:58.802816   20704 out.go:177] * [functional-870347] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:40:58.804549   20704 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:40:58.804580   20704 notify.go:220] Checking for updates...
	I0923 10:40:58.806798   20704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:40:58.807909   20704 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:40:58.809268   20704 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:40:58.810502   20704 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:40:58.811886   20704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:40:58.813710   20704 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:40:58.814396   20704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.814471   20704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:58.838401   20704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0923 10:40:58.838800   20704 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:58.839414   20704 main.go:141] libmachine: Using API Version  1
	I0923 10:40:58.839444   20704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:58.839995   20704 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:58.840192   20704 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:58.840437   20704 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:40:58.840820   20704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.840849   20704 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:58.859878   20704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I0923 10:40:58.860378   20704 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:58.860834   20704 main.go:141] libmachine: Using API Version  1
	I0923 10:40:58.860852   20704 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:58.861294   20704 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:58.861545   20704 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:58.898414   20704 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 10:40:58.899443   20704 start.go:297] selected driver: kvm2
	I0923 10:40:58.899471   20704 start.go:901] validating driver "kvm2" against &{Name:functional-870347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-870347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:58.899581   20704 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:40:58.901451   20704 out.go:201] 
	W0923 10:40:58.902905   20704 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:40:58.903750   20704 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
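
The exit status 23 above is the RSRC_INSUFFICIENT_REQ_MEMORY validation firing: --dry-run re-validates the flags against the existing profile without starting anything, and the requested 250MiB is below the 1800MB usable minimum reported in the output. A sketch of the same check:

	out/minikube-linux-amd64 start -p functional-870347 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio
	echo "exit code: $?"    # 23 in the run above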

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-870347 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-870347 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.469888ms)

                                                
                                                
-- stdout --
	* [functional-870347] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 10:40:58.647190   20666 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:58.647315   20666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.647325   20666 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:58.647330   20666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:58.647636   20666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 10:40:58.648162   20666 out.go:352] Setting JSON to false
	I0923 10:40:58.649080   20666 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1402,"bootTime":1727086657,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:40:58.649178   20666 start.go:139] virtualization: kvm guest
	I0923 10:40:58.651485   20666 out.go:177] * [functional-870347] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 10:40:58.653140   20666 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:40:58.653197   20666 notify.go:220] Checking for updates...
	I0923 10:40:58.655950   20666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:40:58.656882   20666 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 10:40:58.657979   20666 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 10:40:58.659312   20666 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:40:58.660650   20666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:40:58.662656   20666 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:40:58.663455   20666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.663528   20666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:58.679824   20666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0923 10:40:58.680262   20666 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:58.680755   20666 main.go:141] libmachine: Using API Version  1
	I0923 10:40:58.680774   20666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:58.681202   20666 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:58.681426   20666 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:58.681682   20666 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:40:58.681999   20666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 10:40:58.682050   20666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 10:40:58.697142   20666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0923 10:40:58.697622   20666 main.go:141] libmachine: () Calling .GetVersion
	I0923 10:40:58.698189   20666 main.go:141] libmachine: Using API Version  1
	I0923 10:40:58.698224   20666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 10:40:58.698587   20666 main.go:141] libmachine: () Calling .GetMachineName
	I0923 10:40:58.698778   20666 main.go:141] libmachine: (functional-870347) Calling .DriverName
	I0923 10:40:58.736921   20666 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0923 10:40:58.738405   20666 start.go:297] selected driver: kvm2
	I0923 10:40:58.738432   20666 start.go:901] validating driver "kvm2" against &{Name:functional-870347 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-870347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:58.738557   20666 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:40:58.740969   20666 out.go:201] 
	W0923 10:40:58.742220   20666 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:40:58.743766   20666 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-870347 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-870347 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gc8ht" [7086ffda-bfe3-4d93-afcf-4d51c80b1156] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gc8ht" [7086ffda-bfe3-4d93-afcf-4d51c80b1156] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004339656s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.190:30915
functional_test.go:1675: http://192.168.39.190:30915: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-gc8ht

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.190:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.190:30915
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.55s)
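
The flow above (Deployment, then NodePort Service, then resolving the node URL through minikube) can be replayed with the same objects; a sketch, with the wait and curl steps added for manual use rather than taken from the test:

	kubectl --context functional-870347 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-870347 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	kubectl --context functional-870347 wait --for=condition=Available \
	  deployment/hello-node-connect --timeout=120s

	# Prints the node IP plus the allocated NodePort, e.g. http://192.168.39.190:30915 above.
	URL=$(out/minikube-linux-amd64 -p functional-870347 service hello-node-connect --url)
	curl -s "$URL"    # echoserver answers with hostname, request headers, and body info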

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.19s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b54319a0-aafe-4e68-addd-71b31e5ccde6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004115812s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-870347 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-870347 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-870347 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-870347 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a77152c-462c-4646-bfae-4d6a25bc7b87] Pending
helpers_test.go:344: "sp-pod" [8a77152c-462c-4646-bfae-4d6a25bc7b87] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8a77152c-462c-4646-bfae-4d6a25bc7b87] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003367292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-870347 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-870347 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-870347 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5328749a-cfc1-4dda-8d9a-ced26ee5c083] Pending
helpers_test.go:344: "sp-pod" [5328749a-cfc1-4dda-8d9a-ced26ee5c083] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5328749a-cfc1-4dda-8d9a-ced26ee5c083] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004268535s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-870347 exec sp-pod -- ls /tmp/mount
E0923 10:41:59.292633   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:15.431522   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:44:43.134100   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:49:15.431315   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.19s)
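
What this test verifies is that data written into the PVC-backed volume survives deletion of the pod. With the same testdata manifests from the minikube repository, the check reduces to the following sketch; the wait steps stand in for the readiness polling the harness does:

	kubectl --context functional-870347 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-870347 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-870347 wait --for=condition=Ready pod/sp-pod --timeout=180s

	kubectl --context functional-870347 exec sp-pod -- touch /tmp/mount/foo     # write through the claim
	kubectl --context functional-870347 delete -f testdata/storage-provisioner/pod.yaml

	kubectl --context functional-870347 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same PVC
	kubectl --context functional-870347 wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context functional-870347 exec sp-pod -- ls /tmp/mount            # foo is still there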

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh -n functional-870347 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cp functional-870347:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3289134637/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh -n functional-870347 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh -n functional-870347 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
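
Both copy directions exercised above go through the same `cp` subcommand; a sketch, where the /tmp destination file is an arbitrary choice and not part of the test:

	# host -> guest, then read it back over ssh
	out/minikube-linux-amd64 -p functional-870347 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-870347 ssh -n functional-870347 "sudo cat /home/docker/cp-test.txt"

	# guest -> host
	out/minikube-linux-amd64 -p functional-870347 cp functional-870347:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
	cat /tmp/cp-test-copy.txt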

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11139/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /etc/test/nested/copy/11139/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11139.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /etc/ssl/certs/11139.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11139.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /usr/share/ca-certificates/11139.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/111392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /etc/ssl/certs/111392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/111392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /usr/share/ca-certificates/111392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-870347 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active docker": exit status 1 (189.274468ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active containerd": exit status 1 (186.215345ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
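
The non-zero exits above are the expected result: on a cri-o profile, docker and containerd are not running, so `systemctl is-active` prints "inactive" and returns status 3, which the ssh wrapper surfaces as exit 1. A quick manual check, assuming the crio unit name used by minikube's cri-o guests:

	out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active crio"         # active
	out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
	out/minikube-linux-amd64 -p functional-870347 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit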

                                                
                                    
TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-870347 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-870347 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bnrp8" [657df601-1bb6-4cc0-8e2f-bab433678183] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bnrp8" [657df601-1bb6-4cc0-8e2f-bab433678183] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003592424s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdany-port2378633038/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727088057559121634" to /tmp/TestFunctionalparallelMountCmdany-port2378633038/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727088057559121634" to /tmp/TestFunctionalparallelMountCmdany-port2378633038/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727088057559121634" to /tmp/TestFunctionalparallelMountCmdany-port2378633038/001/test-1727088057559121634
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.596896ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 10:40:57.773960   11139 retry.go:31] will retry after 463.268396ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 10:40 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 10:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 10:40 test-1727088057559121634
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh cat /mount-9p/test-1727088057559121634
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-870347 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c674d487-4763-48bc-aac4-b820df86baed] Pending
helpers_test.go:344: "busybox-mount" [c674d487-4763-48bc-aac4-b820df86baed] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c674d487-4763-48bc-aac4-b820df86baed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c674d487-4763-48bc-aac4-b820df86baed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003060229s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-870347 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdany-port2378633038/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.50s)
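
The 9p mount flow above (a host directory exposed at /mount-9p inside the guest) can be reproduced by hand; a minimal sketch, with /tmp/demo-mount standing in for the test's temp directory:

	mkdir -p /tmp/demo-mount && echo "hello from host" > /tmp/demo-mount/probe

	# `minikube mount` runs in the foreground, so background it (the test runs it as a daemon).
	out/minikube-linux-amd64 mount -p functional-870347 /tmp/demo-mount:/mount-9p &
	MOUNT_PID=$!
	sleep 5

	out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p"   # confirms the 9p mount
	out/minikube-linux-amd64 -p functional-870347 ssh "cat /mount-9p/probe"              # hello from host

	kill $MOUNT_PID    # stopping the mount process also tears down the guest mount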

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "363.486402ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.912722ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "317.467431ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.266598ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdspecific-port1404503847/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.263513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 10:41:09.281649   11139 retry.go:31] will retry after 591.825301ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdspecific-port1404503847/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "sudo umount -f /mount-9p": exit status 1 (177.92126ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-870347 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdspecific-port1404503847/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service list -o json
functional_test.go:1494: Took "492.668123ms" to run "out/minikube-linux-amd64 -p functional-870347 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.190:30226
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T" /mount1: exit status 1 (287.027829ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 10:41:11.284112   11139 retry.go:31] will retry after 524.427401ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-870347 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-870347 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3264834792/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.190:30226
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870347 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-870347
localhost/kicbase/echo-server:functional-870347
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870347 image ls --format short --alsologtostderr:
I0923 10:41:26.091431   22561 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:26.091524   22561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.091531   22561 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:26.091535   22561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.091719   22561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
I0923 10:41:26.092214   22561 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.092308   22561 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.092678   22561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.092736   22561 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.107193   22561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
I0923 10:41:26.107696   22561 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.108280   22561 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.108306   22561 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.108694   22561 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.108895   22561 main.go:141] libmachine: (functional-870347) Calling .GetState
I0923 10:41:26.110901   22561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.110934   22561 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.124906   22561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
I0923 10:41:26.125307   22561 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.125776   22561 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.125814   22561 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.126108   22561 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.126253   22561 main.go:141] libmachine: (functional-870347) Calling .DriverName
I0923 10:41:26.126445   22561 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:26.126482   22561 main.go:141] libmachine: (functional-870347) Calling .GetSSHHostname
I0923 10:41:26.129871   22561 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.130313   22561 main.go:141] libmachine: (functional-870347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:3e:46", ip: ""} in network mk-functional-870347: {Iface:virbr1 ExpiryTime:2024-09-23 11:38:46 +0000 UTC Type:0 Mac:52:54:00:8a:3e:46 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-870347 Clientid:01:52:54:00:8a:3e:46}
I0923 10:41:26.130377   22561 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined IP address 192.168.39.190 and MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.131112   22561 main.go:141] libmachine: (functional-870347) Calling .GetSSHPort
I0923 10:41:26.131242   22561 main.go:141] libmachine: (functional-870347) Calling .GetSSHKeyPath
I0923 10:41:26.131387   22561 main.go:141] libmachine: (functional-870347) Calling .GetSSHUsername
I0923 10:41:26.131494   22561 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/functional-870347/id_rsa Username:docker}
I0923 10:41:26.209127   22561 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 10:41:26.255698   22561 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.255714   22561 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.256002   22561 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:26.256047   22561 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.256068   22561 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 10:41:26.256077   22561 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.256084   22561 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.256313   22561 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:26.256335   22561 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.256357   22561 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870347 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-870347  | 4bd46bc7c93b2 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-870347  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870347 image ls --format table --alsologtostderr:
I0923 10:41:26.527166   22669 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:26.527308   22669 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.527321   22669 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:26.527327   22669 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.527587   22669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
I0923 10:41:26.528416   22669 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.528569   22669 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.529190   22669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.529243   22669 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.545611   22669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
I0923 10:41:26.546115   22669 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.546694   22669 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.546720   22669 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.547140   22669 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.547381   22669 main.go:141] libmachine: (functional-870347) Calling .GetState
I0923 10:41:26.549237   22669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.549331   22669 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.564285   22669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
I0923 10:41:26.564743   22669 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.565205   22669 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.565228   22669 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.565644   22669 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.565838   22669 main.go:141] libmachine: (functional-870347) Calling .DriverName
I0923 10:41:26.566053   22669 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:26.566083   22669 main.go:141] libmachine: (functional-870347) Calling .GetSSHHostname
I0923 10:41:26.569293   22669 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.569797   22669 main.go:141] libmachine: (functional-870347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:3e:46", ip: ""} in network mk-functional-870347: {Iface:virbr1 ExpiryTime:2024-09-23 11:38:46 +0000 UTC Type:0 Mac:52:54:00:8a:3e:46 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-870347 Clientid:01:52:54:00:8a:3e:46}
I0923 10:41:26.569824   22669 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined IP address 192.168.39.190 and MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.569969   22669 main.go:141] libmachine: (functional-870347) Calling .GetSSHPort
I0923 10:41:26.570123   22669 main.go:141] libmachine: (functional-870347) Calling .GetSSHKeyPath
I0923 10:41:26.570305   22669 main.go:141] libmachine: (functional-870347) Calling .GetSSHUsername
I0923 10:41:26.570442   22669 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/functional-870347/id_rsa Username:docker}
I0923 10:41:26.649860   22669 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 10:41:26.700901   22669 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.700927   22669 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.701208   22669 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.701223   22669 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 10:41:26.701238   22669 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:26.701243   22669 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.701250   22669 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.701511   22669 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.701530   22669 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870347 image ls --format json --alsologtostderr:
[{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"re
poTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"
},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k
8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dc
f7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-870347"],"size":"4943877"},{"id":"4bd46bc7c93b2e72d6a2d4a0692662e4da425ada7fc8e478c48988b97bd949f8","repoDigests":["localhost/minikube-local-cache-test@sha256:35a73c1ae9dbc436988295fe89dcc4e487933c085950cc63c29d9094d2470c37"],"repoTags":["localhost/minikube-local-cache-test:functional-870347"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:c
b9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870347 image ls --format json --alsologtostderr:
I0923 10:41:26.313758   22612 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:26.313844   22612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.313852   22612 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:26.313856   22612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.314034   22612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
I0923 10:41:26.314591   22612 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.314682   22612 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.315016   22612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.315050   22612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.329771   22612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
I0923 10:41:26.330157   22612 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.330741   22612 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.330761   22612 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.331137   22612 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.331359   22612 main.go:141] libmachine: (functional-870347) Calling .GetState
I0923 10:41:26.333152   22612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.333203   22612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.347902   22612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
I0923 10:41:26.348294   22612 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.348743   22612 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.348761   22612 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.349067   22612 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.349240   22612 main.go:141] libmachine: (functional-870347) Calling .DriverName
I0923 10:41:26.349418   22612 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:26.349449   22612 main.go:141] libmachine: (functional-870347) Calling .GetSSHHostname
I0923 10:41:26.352316   22612 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.352667   22612 main.go:141] libmachine: (functional-870347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:3e:46", ip: ""} in network mk-functional-870347: {Iface:virbr1 ExpiryTime:2024-09-23 11:38:46 +0000 UTC Type:0 Mac:52:54:00:8a:3e:46 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-870347 Clientid:01:52:54:00:8a:3e:46}
I0923 10:41:26.352697   22612 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined IP address 192.168.39.190 and MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.352854   22612 main.go:141] libmachine: (functional-870347) Calling .GetSSHPort
I0923 10:41:26.353470   22612 main.go:141] libmachine: (functional-870347) Calling .GetSSHKeyPath
I0923 10:41:26.353640   22612 main.go:141] libmachine: (functional-870347) Calling .GetSSHUsername
I0923 10:41:26.353817   22612 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/functional-870347/id_rsa Username:docker}
I0923 10:41:26.432013   22612 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 10:41:26.472272   22612 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.472286   22612 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.472536   22612 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.472555   22612 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 10:41:26.472564   22612 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:26.472570   22612 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.472578   22612 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.472802   22612 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.472814   22612 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
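The stdout captured above is a single JSON array of image records, each with id, repoDigests, repoTags, and size fields. A minimal Go sketch for decoding that output is shown below; the struct name and the way the binary is invoked are illustrative assumptions that mirror what the log shows, not types or helpers taken from the minikube test code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageRecord mirrors the fields visible in the `image ls --format json` output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Run the same command the test runs and decode its stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-870347",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image ls failed:", err)
		os.Exit(1)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%v %s bytes\n", img.RepoTags, img.Size)
	}
}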

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870347 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 4bd46bc7c93b2e72d6a2d4a0692662e4da425ada7fc8e478c48988b97bd949f8
repoDigests:
- localhost/minikube-local-cache-test@sha256:35a73c1ae9dbc436988295fe89dcc4e487933c085950cc63c29d9094d2470c37
repoTags:
- localhost/minikube-local-cache-test:functional-870347
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-870347
size: "4943877"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870347 image ls --format yaml --alsologtostderr:
I0923 10:41:26.092039   22560 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:26.092272   22560 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.092283   22560 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:26.092287   22560 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.092472   22560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
I0923 10:41:26.093118   22560 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.093226   22560 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.093651   22560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.093713   22560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.107890   22560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
I0923 10:41:26.108322   22560 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.108864   22560 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.108896   22560 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.109187   22560 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.109363   22560 main.go:141] libmachine: (functional-870347) Calling .GetState
I0923 10:41:26.111215   22560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.111259   22560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.124906   22560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
I0923 10:41:26.125357   22560 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.125806   22560 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.125835   22560 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.126121   22560 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.126296   22560 main.go:141] libmachine: (functional-870347) Calling .DriverName
I0923 10:41:26.126470   22560 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:26.126487   22560 main.go:141] libmachine: (functional-870347) Calling .GetSSHHostname
I0923 10:41:26.129367   22560 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.129726   22560 main.go:141] libmachine: (functional-870347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:3e:46", ip: ""} in network mk-functional-870347: {Iface:virbr1 ExpiryTime:2024-09-23 11:38:46 +0000 UTC Type:0 Mac:52:54:00:8a:3e:46 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-870347 Clientid:01:52:54:00:8a:3e:46}
I0923 10:41:26.129767   22560 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined IP address 192.168.39.190 and MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.129874   22560 main.go:141] libmachine: (functional-870347) Calling .GetSSHPort
I0923 10:41:26.130071   22560 main.go:141] libmachine: (functional-870347) Calling .GetSSHKeyPath
I0923 10:41:26.130245   22560 main.go:141] libmachine: (functional-870347) Calling .GetSSHUsername
I0923 10:41:26.130373   22560 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/functional-870347/id_rsa Username:docker}
I0923 10:41:26.208755   22560 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 10:41:26.262176   22560 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.262189   22560 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.262413   22560 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.262431   22560 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 10:41:26.262440   22560 main.go:141] libmachine: Making call to close driver server
I0923 10:41:26.262448   22560 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:26.262462   22560 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:26.262676   22560 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:26.262688   22560 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-870347 ssh pgrep buildkitd: exit status 1 (193.098437ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image build -t localhost/my-image:functional-870347 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 image build -t localhost/my-image:functional-870347 testdata/build --alsologtostderr: (3.953786018s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-870347 image build -t localhost/my-image:functional-870347 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7a0a357f7f7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-870347
--> 725a976d9e2
Successfully tagged localhost/my-image:functional-870347
725a976d9e29630dfa2ee63278cea5b24c6db74cc9cfc098a24b33a3077c57cd
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-870347 image build -t localhost/my-image:functional-870347 testdata/build --alsologtostderr:
I0923 10:41:26.504858   22658 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:26.505014   22658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.505024   22658 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:26.505029   22658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:26.505223   22658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
I0923 10:41:26.505894   22658 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.506398   22658 config.go:182] Loaded profile config "functional-870347": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:26.506761   22658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.506825   22658 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.524008   22658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
I0923 10:41:26.524560   22658 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.525191   22658 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.525209   22658 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.525620   22658 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.525888   22658 main.go:141] libmachine: (functional-870347) Calling .GetState
I0923 10:41:26.528198   22658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 10:41:26.528243   22658 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 10:41:26.543707   22658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
I0923 10:41:26.544300   22658 main.go:141] libmachine: () Calling .GetVersion
I0923 10:41:26.544870   22658 main.go:141] libmachine: Using API Version  1
I0923 10:41:26.544896   22658 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 10:41:26.545219   22658 main.go:141] libmachine: () Calling .GetMachineName
I0923 10:41:26.545558   22658 main.go:141] libmachine: (functional-870347) Calling .DriverName
I0923 10:41:26.545791   22658 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:26.545818   22658 main.go:141] libmachine: (functional-870347) Calling .GetSSHHostname
I0923 10:41:26.548869   22658 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.549252   22658 main.go:141] libmachine: (functional-870347) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:3e:46", ip: ""} in network mk-functional-870347: {Iface:virbr1 ExpiryTime:2024-09-23 11:38:46 +0000 UTC Type:0 Mac:52:54:00:8a:3e:46 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-870347 Clientid:01:52:54:00:8a:3e:46}
I0923 10:41:26.549270   22658 main.go:141] libmachine: (functional-870347) DBG | domain functional-870347 has defined IP address 192.168.39.190 and MAC address 52:54:00:8a:3e:46 in network mk-functional-870347
I0923 10:41:26.549430   22658 main.go:141] libmachine: (functional-870347) Calling .GetSSHPort
I0923 10:41:26.549591   22658 main.go:141] libmachine: (functional-870347) Calling .GetSSHKeyPath
I0923 10:41:26.549730   22658 main.go:141] libmachine: (functional-870347) Calling .GetSSHUsername
I0923 10:41:26.549881   22658 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/functional-870347/id_rsa Username:docker}
I0923 10:41:26.628009   22658 build_images.go:161] Building image from path: /tmp/build.3399278746.tar
I0923 10:41:26.628086   22658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:41:26.638135   22658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3399278746.tar
I0923 10:41:26.642733   22658 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3399278746.tar: stat -c "%s %y" /var/lib/minikube/build/build.3399278746.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3399278746.tar': No such file or directory
I0923 10:41:26.642764   22658 ssh_runner.go:362] scp /tmp/build.3399278746.tar --> /var/lib/minikube/build/build.3399278746.tar (3072 bytes)
I0923 10:41:26.670605   22658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3399278746
I0923 10:41:26.683909   22658 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3399278746 -xf /var/lib/minikube/build/build.3399278746.tar
I0923 10:41:26.706279   22658 crio.go:315] Building image: /var/lib/minikube/build/build.3399278746
I0923 10:41:26.706346   22658 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-870347 /var/lib/minikube/build/build.3399278746 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0923 10:41:30.385931   22658 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-870347 /var/lib/minikube/build/build.3399278746 --cgroup-manager=cgroupfs: (3.679560202s)
I0923 10:41:30.386001   22658 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3399278746
I0923 10:41:30.396628   22658 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3399278746.tar
I0923 10:41:30.406030   22658 build_images.go:217] Built localhost/my-image:functional-870347 from /tmp/build.3399278746.tar
I0923 10:41:30.406068   22658 build_images.go:133] succeeded building to: functional-870347
I0923 10:41:30.406074   22658 build_images.go:134] failed building to: 
I0923 10:41:30.406099   22658 main.go:141] libmachine: Making call to close driver server
I0923 10:41:30.406115   22658 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:30.406393   22658 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:30.406408   22658 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 10:41:30.406416   22658 main.go:141] libmachine: Making call to close driver server
I0923 10:41:30.406436   22658 main.go:141] libmachine: (functional-870347) Calling .Close
I0923 10:41:30.406652   22658 main.go:141] libmachine: (functional-870347) DBG | Closing plugin on server side
I0923 10:41:30.406668   22658 main.go:141] libmachine: Successfully made call to close driver server
I0923 10:41:30.406678   22658 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.36s)
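The three build steps recorded in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply that testdata/build contains a content.txt file alongside a Containerfile roughly like the following. This is a reconstruction from the log, not the file as it ships in the repository.

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /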

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.957203203s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-870347
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image load --daemon kicbase/echo-server:functional-870347 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 image load --daemon kicbase/echo-server:functional-870347 --alsologtostderr: (1.70234441s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image load --daemon kicbase/echo-server:functional-870347 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/09/23 10:41:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-870347
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image load --daemon kicbase/echo-server:functional-870347 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image save kicbase/echo-server:functional-870347 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image rm kicbase/echo-server:functional-870347 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 image rm kicbase/echo-server:functional-870347 --alsologtostderr: (2.171621496s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-870347 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.286303954s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-870347
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-870347 image save --daemon kicbase/echo-server:functional-870347 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-870347
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-870347
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-870347
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-870347
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-790780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0923 10:54:15.432022   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-790780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.018770644s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.68s)
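
The --ha start above brings up a cluster with multiple control-plane nodes, and the follow-up status call is what the rest of the serial suite relies on to know every node came up healthy. A rough sketch of those two steps outside the test harness, reusing the flags and profile name from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        profile := "ha-790780"

        // Start an HA (multi-control-plane) cluster, as in ha_test.go:101.
        start := exec.Command(mk, "start", "-p", profile, "--wait=true", "--memory=2200",
            "--ha", "-v=7", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
        if out, err := start.CombinedOutput(); err != nil {
            log.Fatalf("start failed: %v\n%s", err, out)
        }

        // Then ask for status; a non-zero exit here means some node is not healthy.
        out, err := exec.Command(mk, "-p", profile, "status", "-v=7", "--alsologtostderr").CombinedOutput()
        if err != nil {
            log.Fatalf("status reported a problem: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }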

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-790780 -- rollout status deployment/busybox: (5.719328476s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-2f4vm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hdk9n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hmsb2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-2f4vm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hdk9n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hmsb2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-2f4vm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hdk9n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hmsb2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.89s)
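
DeployApp applies a busybox Deployment, waits for the rollout, then resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from every replica to prove in-cluster DNS works wherever the pods landed. A condensed sketch of that verification loop; the pod names are the ones from this run, whereas the real test reads them from the jsonpath query first:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        profile := "ha-790780"
        pods := []string{"busybox-7dff88458-2f4vm", "busybox-7dff88458-hdk9n", "busybox-7dff88458-hmsb2"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

        for _, pod := range pods {
            for _, name := range names {
                // Equivalent of: minikube kubectl -p ha-790780 -- exec <pod> -- nslookup <name>
                cmd := exec.Command(mk, "kubectl", "-p", profile, "--", "exec", pod, "--", "nslookup", name)
                if out, err := cmd.CombinedOutput(); err != nil {
                    log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
                }
            }
        }
        log.Println("DNS lookups succeeded from all busybox replicas")
    }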

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-2f4vm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-2f4vm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hdk9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hdk9n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hmsb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-790780 -- exec busybox-7dff88458-hmsb2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
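
PingHostFromPods first resolves host.minikube.internal inside each pod, takes the address from the fifth line of the nslookup output (the awk 'NR==5' | cut -d' ' -f3 pipeline), and then sends a single ping to it. A sketch of the same two steps for one pod, keeping the shell pipeline exactly as the log shows it:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        profile := "ha-790780"
        pod := "busybox-7dff88458-2f4vm" // one of the replicas from this run

        // Resolve host.minikube.internal inside the pod and extract the IP.
        resolve := exec.Command(mk, "kubectl", "-p", profile, "--", "exec", pod, "--",
            "sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
        out, err := resolve.CombinedOutput()
        if err != nil {
            log.Fatalf("nslookup failed: %v\n%s", err, out)
        }
        hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run

        // One ICMP probe from the pod back to the host.
        ping := exec.Command(mk, "kubectl", "-p", profile, "--", "exec", pod, "--",
            "sh", "-c", "ping -c 1 "+hostIP)
        if out, err := ping.CombinedOutput(); err != nil {
            log.Fatalf("ping %s failed: %v\n%s", hostIP, err, out)
        }
        log.Printf("pod %s reached host %s", pod, hostIP)
    }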

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-790780 -v=7 --alsologtostderr
E0923 10:55:38.496240   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-790780 -v=7 --alsologtostderr: (58.599604055s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.44s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-790780 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp testdata/cp-test.txt ha-790780:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780:/home/docker/cp-test.txt ha-790780-m02:/home/docker/cp-test_ha-790780_ha-790780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test_ha-790780_ha-790780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780:/home/docker/cp-test.txt ha-790780-m03:/home/docker/cp-test_ha-790780_ha-790780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test_ha-790780_ha-790780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780:/home/docker/cp-test.txt ha-790780-m04:/home/docker/cp-test_ha-790780_ha-790780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test.txt"
E0923 10:55:57.440724   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:55:57.447128   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:55:57.458527   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:55:57.479913   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:55:57.521357   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test_ha-790780_ha-790780-m04.txt"
E0923 10:55:57.603372   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp testdata/cp-test.txt ha-790780-m02:/home/docker/cp-test.txt
E0923 10:55:57.765667   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test.txt"
E0923 10:55:58.088005   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m02:/home/docker/cp-test.txt ha-790780:/home/docker/cp-test_ha-790780-m02_ha-790780.txt
E0923 10:55:58.729830   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test_ha-790780-m02_ha-790780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m02:/home/docker/cp-test.txt ha-790780-m03:/home/docker/cp-test_ha-790780-m02_ha-790780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test_ha-790780-m02_ha-790780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m02:/home/docker/cp-test.txt ha-790780-m04:/home/docker/cp-test_ha-790780-m02_ha-790780-m04.txt
E0923 10:56:00.011520   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test_ha-790780-m02_ha-790780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp testdata/cp-test.txt ha-790780-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt ha-790780:/home/docker/cp-test_ha-790780-m03_ha-790780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test_ha-790780-m03_ha-790780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt ha-790780-m02:/home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test.txt"
E0923 10:56:02.573032   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test_ha-790780-m03_ha-790780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m03:/home/docker/cp-test.txt ha-790780-m04:/home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test_ha-790780-m03_ha-790780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp testdata/cp-test.txt ha-790780-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile644830916/001/cp-test_ha-790780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt ha-790780:/home/docker/cp-test_ha-790780-m04_ha-790780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780 "sudo cat /home/docker/cp-test_ha-790780-m04_ha-790780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt ha-790780-m02:/home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m02 "sudo cat /home/docker/cp-test_ha-790780-m04_ha-790780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 cp ha-790780-m04:/home/docker/cp-test.txt ha-790780-m03:/home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 ssh -n ha-790780-m03 "sudo cat /home/docker/cp-test_ha-790780-m04_ha-790780-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.53s)
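
Each CopyFile step is the same handshake: minikube cp moves cp-test.txt between the host and a node (or node to node), and minikube ssh -n <node> sudo cat reads it back to prove the copy landed. A trimmed sketch of one host-to-node and node-to-node hop, using the node names from this cluster:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one minikube command and aborts on failure, like the "(dbg) Run" steps above.
    func run(args ...string) {
        if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        profile := "ha-790780"

        // Host -> primary node, then read the file back.
        run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
        run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")

        // Primary node -> second control plane, then verify on the target.
        dst := profile + "-m02:/home/docker/cp-test_" + profile + "_" + profile + "-m02.txt"
        run("-p", profile, "cp", profile+":/home/docker/cp-test.txt", dst)
        run("-p", profile, "ssh", "-n", profile+"-m02",
            "sudo cat /home/docker/cp-test_"+profile+"_"+profile+"-m02.txt")
    }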

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0923 10:58:41.301038   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.022079421s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.02s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-790780 node delete m03 -v=7 --alsologtostderr: (15.887163035s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.61s)
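
After deleting m03, the test checks that the surviving nodes are still registered and Ready using a go-template over .status.conditions. A small sketch of that readiness check, shelling out to kubectl the same way and failing if any node reports something other than True (the outer single quotes in the logged command are shell quoting and are dropped here):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as ha_test.go:519: print the Ready condition of every node, one per line.
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl get nodes: %v\n%s", err, out)
        }
        for _, status := range strings.Fields(string(out)) {
            if status != "True" {
                log.Fatalf("a node is not Ready: %q", status)
            }
        }
        log.Println("all remaining nodes are Ready")
    }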

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (320.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-790780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0923 11:09:15.431595   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:10:57.440726   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:18.498356   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:12:20.504648   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-790780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m19.41474808s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (320.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-790780 --control-plane -v=7 --alsologtostderr
E0923 11:14:15.430861   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-790780 --control-plane -v=7 --alsologtostderr: (1m18.582510736s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-790780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.44s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (88.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-998314 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0923 11:15:57.440603   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-998314 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.781357545s)
--- PASS: TestJSONOutput/start/Command (88.78s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-998314 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-998314 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-998314 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-998314 --output=json --user=testUser: (7.351505872s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-568852 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-568852 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.368815ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"48bd181f-c12c-4dc4-a608-cc61baeebcc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-568852] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"94a335e9-567b-47bb-8cff-c491b98c2f2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"fdc8fc98-a409-40f6-be5e-d237c3e76d0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"005a0961-21fd-4069-be75-da19925a44bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig"}}
	{"specversion":"1.0","id":"d179839c-c6f4-4a83-9c77-8264c5d70ade","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube"}}
	{"specversion":"1.0","id":"a0e42d23-1539-44d6-9c20-90845d4a8f62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"269d576e-4b8c-4498-9c75-e1630c063bb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36cd6337-3bdf-4e13-a31c-cde789ca7e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-568852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-568852
--- PASS: TestErrorJSONOutput (0.18s)
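
Every line that --output=json emits is a CloudEvents envelope whose type field distinguishes steps (io.k8s.sigs.minikube.step), info messages and errors, with the payload under data as a flat string map. A sketch, based only on the fields visible in the output above, of how such a stream could be decoded line by line:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    // event mirrors the envelope printed above; data carries message, name,
    // exitcode, currentstep, totalsteps, etc. as strings.
    type event struct {
        Specversion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // ignore any non-JSON lines
            }
            switch ev.Type {
            case "io.k8s.sigs.minikube.error":
                log.Fatalf("minikube error %s (exit code %s): %s",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            case "io.k8s.sigs.minikube.step":
                fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
            default:
                fmt.Println(ev.Data["message"])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }

It would be consumed by piping a JSON-mode invocation into it, e.g. minikube start -p <profile> --output=json --user=testUser piped to this program (names here are illustrative).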

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-160028 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-160028 --driver=kvm2  --container-runtime=crio: (45.062746347s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-173475 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-173475 --driver=kvm2  --container-runtime=crio: (42.004206498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-160028
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-173475
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-173475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-173475
helpers_test.go:175: Cleaning up "first-160028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-160028
--- PASS: TestMinikubeProfile (89.70s)
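
TestMinikubeProfile creates two profiles, makes each active in turn with minikube profile <name>, and checks that profile list -ojson still answers. A minimal sketch of that last check which deliberately assumes nothing about the JSON schema beyond it being well formed:

    package main

    import (
        "encoding/json"
        "log"
        "os/exec"
    )

    func main() {
        mk := "out/minikube-linux-amd64"

        // Switch the active profile, then list all profiles as JSON.
        if out, err := exec.Command(mk, "profile", "first-160028").CombinedOutput(); err != nil {
            log.Fatalf("profile switch failed: %v\n%s", err, out)
        }
        out, err := exec.Command(mk, "profile", "list", "-ojson").CombinedOutput()
        if err != nil {
            log.Fatalf("profile list failed: %v\n%s", err, out)
        }
        if !json.Valid(out) {
            log.Fatalf("profile list -ojson did not return valid JSON:\n%s", out)
        }
        log.Println("profile list -ojson is valid JSON")
    }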

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-619378 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-619378 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.281992482s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.28s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-619378 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-619378 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
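
The mount verifications are two ssh probes: list /minikube-host inside the guest and confirm a 9p filesystem appears in the mount table (the log does this with mount | grep 9p; the sketch below runs mount and searches the output in Go instead). It assumes the first mount-start profile from this run:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        profile := "mount-start-1-619378"

        // The mounted host directory must be visible inside the guest.
        if out, err := exec.Command(mk, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
            log.Fatalf("ls /minikube-host failed: %v\n%s", err, out)
        }

        // And the guest's mount table must contain a 9p entry for it.
        out, err := exec.Command(mk, "-p", profile, "ssh", "--", "mount").CombinedOutput()
        if err != nil {
            log.Fatalf("mount failed: %v\n%s", err, out)
        }
        if !strings.Contains(string(out), "9p") {
            log.Fatal("no 9p filesystem found in the guest's mount table")
        }
        log.Println("9p mount verified")
    }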

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-634505 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-634505 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.43587015s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-619378 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-634505
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-634505: (1.283801172s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-634505
E0923 11:19:15.431103   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-634505: (22.26664107s)
--- PASS: TestMountStart/serial/RestartStopped (23.27s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634505 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-399279 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0923 11:20:57.440772   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-399279 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.231533022s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.63s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-399279 -- rollout status deployment/busybox: (4.329245702s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-49q42 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-7b2xk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-49q42 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-7b2xk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-49q42 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-7b2xk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.78s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-49q42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-49q42 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-7b2xk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-399279 -- exec busybox-7dff88458-7b2xk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (55.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-399279 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-399279 -v 3 --alsologtostderr: (54.983396843s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.54s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-399279 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp testdata/cp-test.txt multinode-399279:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279:/home/docker/cp-test.txt multinode-399279-m02:/home/docker/cp-test_multinode-399279_multinode-399279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test_multinode-399279_multinode-399279-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279:/home/docker/cp-test.txt multinode-399279-m03:/home/docker/cp-test_multinode-399279_multinode-399279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test_multinode-399279_multinode-399279-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp testdata/cp-test.txt multinode-399279-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt multinode-399279:/home/docker/cp-test_multinode-399279-m02_multinode-399279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test_multinode-399279-m02_multinode-399279.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m02:/home/docker/cp-test.txt multinode-399279-m03:/home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test_multinode-399279-m02_multinode-399279-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp testdata/cp-test.txt multinode-399279-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2040024565/001/cp-test_multinode-399279-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt multinode-399279:/home/docker/cp-test_multinode-399279-m03_multinode-399279.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279 "sudo cat /home/docker/cp-test_multinode-399279-m03_multinode-399279.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 cp multinode-399279-m03:/home/docker/cp-test.txt multinode-399279-m02:/home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 ssh -n multinode-399279-m02 "sudo cat /home/docker/cp-test_multinode-399279-m03_multinode-399279-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.07s)

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 node stop m03: (1.511957478s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-399279 status: exit status 7 (428.228363ms)

                                                
                                                
-- stdout --
	multinode-399279
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-399279-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-399279-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr: exit status 7 (412.100129ms)

                                                
                                                
-- stdout --
	multinode-399279
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-399279-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-399279-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:22:34.917802   42263 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:22:34.917911   42263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:22:34.917918   42263 out.go:358] Setting ErrFile to fd 2...
	I0923 11:22:34.917923   42263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:22:34.918093   42263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:22:34.918257   42263 out.go:352] Setting JSON to false
	I0923 11:22:34.918289   42263 mustload.go:65] Loading cluster: multinode-399279
	I0923 11:22:34.918387   42263 notify.go:220] Checking for updates...
	I0923 11:22:34.918653   42263 config.go:182] Loaded profile config "multinode-399279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:22:34.918674   42263 status.go:174] checking status of multinode-399279 ...
	I0923 11:22:34.919161   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:34.919221   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:34.938244   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I0923 11:22:34.938756   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:34.939441   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:34.939472   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:34.939766   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:34.939929   42263 main.go:141] libmachine: (multinode-399279) Calling .GetState
	I0923 11:22:34.941660   42263 status.go:364] multinode-399279 host status = "Running" (err=<nil>)
	I0923 11:22:34.941677   42263 host.go:66] Checking if "multinode-399279" exists ...
	I0923 11:22:34.941959   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:34.941990   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:34.956914   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0923 11:22:34.957357   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:34.957857   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:34.957881   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:34.958186   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:34.958352   42263 main.go:141] libmachine: (multinode-399279) Calling .GetIP
	I0923 11:22:34.961328   42263 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:22:34.961870   42263 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:22:34.961901   42263 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:22:34.962000   42263 host.go:66] Checking if "multinode-399279" exists ...
	I0923 11:22:34.962338   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:34.962378   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:34.978291   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0923 11:22:34.978814   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:34.979271   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:34.979291   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:34.979578   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:34.979771   42263 main.go:141] libmachine: (multinode-399279) Calling .DriverName
	I0923 11:22:34.979945   42263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:22:34.979982   42263 main.go:141] libmachine: (multinode-399279) Calling .GetSSHHostname
	I0923 11:22:34.982744   42263 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:22:34.983169   42263 main.go:141] libmachine: (multinode-399279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:d1:f5", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:19:47 +0000 UTC Type:0 Mac:52:54:00:6b:d1:f5 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-399279 Clientid:01:52:54:00:6b:d1:f5}
	I0923 11:22:34.983205   42263 main.go:141] libmachine: (multinode-399279) DBG | domain multinode-399279 has defined IP address 192.168.39.71 and MAC address 52:54:00:6b:d1:f5 in network mk-multinode-399279
	I0923 11:22:34.983345   42263 main.go:141] libmachine: (multinode-399279) Calling .GetSSHPort
	I0923 11:22:34.983525   42263 main.go:141] libmachine: (multinode-399279) Calling .GetSSHKeyPath
	I0923 11:22:34.983667   42263 main.go:141] libmachine: (multinode-399279) Calling .GetSSHUsername
	I0923 11:22:34.983797   42263 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279/id_rsa Username:docker}
	I0923 11:22:35.064852   42263 ssh_runner.go:195] Run: systemctl --version
	I0923 11:22:35.071145   42263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:22:35.087002   42263 kubeconfig.go:125] found "multinode-399279" server: "https://192.168.39.71:8443"
	I0923 11:22:35.087038   42263 api_server.go:166] Checking apiserver status ...
	I0923 11:22:35.087081   42263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:22:35.101124   42263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1063/cgroup
	W0923 11:22:35.111278   42263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1063/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:22:35.111331   42263 ssh_runner.go:195] Run: ls
	I0923 11:22:35.116230   42263 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0923 11:22:35.120308   42263 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0923 11:22:35.120328   42263 status.go:456] multinode-399279 apiserver status = Running (err=<nil>)
	I0923 11:22:35.120340   42263 status.go:176] multinode-399279 status: &{Name:multinode-399279 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:22:35.120363   42263 status.go:174] checking status of multinode-399279-m02 ...
	I0923 11:22:35.120760   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:35.120803   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:35.135874   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0923 11:22:35.136378   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:35.136813   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:35.136834   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:35.137132   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:35.137310   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetState
	I0923 11:22:35.138704   42263 status.go:364] multinode-399279-m02 host status = "Running" (err=<nil>)
	I0923 11:22:35.138718   42263 host.go:66] Checking if "multinode-399279-m02" exists ...
	I0923 11:22:35.138983   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:35.139017   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:35.153466   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0923 11:22:35.153893   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:35.154284   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:35.154302   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:35.154557   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:35.154740   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetIP
	I0923 11:22:35.157497   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | domain multinode-399279-m02 has defined MAC address 52:54:00:5e:b4:c2 in network mk-multinode-399279
	I0923 11:22:35.157923   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:b4:c2", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:20:46 +0000 UTC Type:0 Mac:52:54:00:5e:b4:c2 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-399279-m02 Clientid:01:52:54:00:5e:b4:c2}
	I0923 11:22:35.157953   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | domain multinode-399279-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:b4:c2 in network mk-multinode-399279
	I0923 11:22:35.158086   42263 host.go:66] Checking if "multinode-399279-m02" exists ...
	I0923 11:22:35.158376   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:35.158409   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:35.173647   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0923 11:22:35.174128   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:35.174602   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:35.174620   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:35.174915   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:35.175072   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .DriverName
	I0923 11:22:35.175239   42263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:22:35.175260   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetSSHHostname
	I0923 11:22:35.177639   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | domain multinode-399279-m02 has defined MAC address 52:54:00:5e:b4:c2 in network mk-multinode-399279
	I0923 11:22:35.177988   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:b4:c2", ip: ""} in network mk-multinode-399279: {Iface:virbr1 ExpiryTime:2024-09-23 12:20:46 +0000 UTC Type:0 Mac:52:54:00:5e:b4:c2 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-399279-m02 Clientid:01:52:54:00:5e:b4:c2}
	I0923 11:22:35.178011   42263 main.go:141] libmachine: (multinode-399279-m02) DBG | domain multinode-399279-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:b4:c2 in network mk-multinode-399279
	I0923 11:22:35.178158   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetSSHPort
	I0923 11:22:35.178305   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetSSHKeyPath
	I0923 11:22:35.178429   42263 main.go:141] libmachine: (multinode-399279-m02) Calling .GetSSHUsername
	I0923 11:22:35.178562   42263 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19689-3961/.minikube/machines/multinode-399279-m02/id_rsa Username:docker}
	I0923 11:22:35.256554   42263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:22:35.269821   42263 status.go:176] multinode-399279-m02 status: &{Name:multinode-399279-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:22:35.269852   42263 status.go:174] checking status of multinode-399279-m03 ...
	I0923 11:22:35.270166   42263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 11:22:35.270210   42263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:22:35.285321   42263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0923 11:22:35.285832   42263 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:22:35.286294   42263 main.go:141] libmachine: Using API Version  1
	I0923 11:22:35.286321   42263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:22:35.286662   42263 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:22:35.286802   42263 main.go:141] libmachine: (multinode-399279-m03) Calling .GetState
	I0923 11:22:35.288273   42263 status.go:364] multinode-399279-m03 host status = "Stopped" (err=<nil>)
	I0923 11:22:35.288290   42263 status.go:377] host is not running, skipping remaining checks
	I0923 11:22:35.288296   42263 status.go:176] multinode-399279-m03 status: &{Name:multinode-399279-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
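
For context on the exit code above: `minikube status` returns a non-zero code when any node is down, and this run returned 7 once m03's host was Stopped. A minimal Go sketch of the same stop-then-check sequence, assuming minikube is on PATH and reusing this run's profile name purely as a placeholder:

// stop_node_check.go - a hedged sketch, not part of the minikube test suite.
// Assumes: `minikube` on PATH and an existing multi-node profile (name is a placeholder).
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "multinode-399279" // placeholder profile name

	// Stop the third node, as the test does with `node stop m03`.
	if out, err := exec.Command("minikube", "-p", profile, "node", "stop", "m03").CombinedOutput(); err != nil {
		log.Fatalf("node stop failed: %v\n%s", err, out)
	}

	// In the run above, `minikube status` exited with code 7 once one host
	// reported Stopped, so a non-zero exit here is the expected outcome.
	err := exec.Command("minikube", "-p", profile, "status").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exit code: %d (7 observed when a host is Stopped)\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		log.Fatalf("unexpected error: %v", err)
	}
	fmt.Println("status exited 0: all nodes report Running")
}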

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 node start m03 -v=7 --alsologtostderr: (39.797283732s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.40s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-399279 node delete m03: (1.782964449s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)
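
The go-template passed to `kubectl get nodes` above emits the Ready condition status for every node, one per line. A small sketch that runs the same template and counts how many nodes report Ready=True, assuming kubectl's current context already points at the cluster under test:

// ready_nodes.go - sketch only; assumes kubectl's current context is the cluster under test.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template the test uses: emit the Ready condition status for each node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes failed: %v", err)
	}

	ready := 0
	for _, field := range strings.Fields(string(out)) {
		if field == "True" {
			ready++
		}
	}
	fmt.Printf("nodes reporting Ready=True: %d\n", ready)
}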

                                                
                                    
TestMultiNode/serial/RestartMultiNode (185.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-399279 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-399279 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.761539553s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-399279 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (185.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-399279
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-399279-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-399279-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.781721ms)

                                                
                                                
-- stdout --
	* [multinode-399279-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-399279-m02' is duplicated with machine name 'multinode-399279-m02' in profile 'multinode-399279'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-399279-m03 --driver=kvm2  --container-runtime=crio
E0923 11:34:15.430934   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-399279-m03 --driver=kvm2  --container-runtime=crio: (42.612446699s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-399279
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-399279: exit status 80 (208.425859ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-399279 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-399279-m03 already exists in multinode-399279-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-399279-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.86s)
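
The name-conflict check above relies on minikube exiting with status 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile. A hedged sketch of that negative check; the profile names are taken from this run and stand in for whatever clusters actually exist:

// name_conflict_check.go - sketch; assumes minikube on PATH and that the
// multi-node profile from this run ("multinode-399279") already exists.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "multinode-399279-m02" is already a machine inside profile "multinode-399279",
	// so starting a new profile with that name should be rejected.
	cmd := exec.Command("minikube", "start", "-p", "multinode-399279-m02",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE rejection (exit 14)")
		fmt.Printf("output:\n%s", out)
		return
	}
	log.Fatalf("expected exit status 14, got err=%v\noutput:\n%s", err, out)
}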

                                                
                                    
TestScheduledStopUnix (115.1s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-824006 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-824006 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.53630314s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824006 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-824006 -n scheduled-stop-824006
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824006 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 11:38:40.089956   11139 retry.go:31] will retry after 112.42µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.091112   11139 retry.go:31] will retry after 157.873µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.092252   11139 retry.go:31] will retry after 232.124µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.093410   11139 retry.go:31] will retry after 320.919µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.094544   11139 retry.go:31] will retry after 495.313µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.095672   11139 retry.go:31] will retry after 535.918µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.096803   11139 retry.go:31] will retry after 856.79µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.097921   11139 retry.go:31] will retry after 987.921µs: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.099043   11139 retry.go:31] will retry after 3.359665ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.103235   11139 retry.go:31] will retry after 3.871749ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.107464   11139 retry.go:31] will retry after 7.77438ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.115687   11139 retry.go:31] will retry after 6.534153ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.122911   11139 retry.go:31] will retry after 15.3773ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.139147   11139 retry.go:31] will retry after 12.976385ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
I0923 11:38:40.152493   11139 retry.go:31] will retry after 15.451414ms: open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/scheduled-stop-824006/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824006 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824006 -n scheduled-stop-824006
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824006
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-824006 --schedule 15s
E0923 11:39:15.431519   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-824006
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-824006: exit status 7 (64.327372ms)

                                                
                                                
-- stdout --
	scheduled-stop-824006
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824006 -n scheduled-stop-824006
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-824006 -n scheduled-stop-824006: exit status 7 (64.116887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-824006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-824006
--- PASS: TestScheduledStopUnix (115.10s)
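
The scheduled-stop flow exercised above is: schedule a stop, cancel it, re-schedule with a short delay, then confirm the profile reports Stopped. A compact sketch of that sequence, assuming minikube is on PATH and using this run's profile name as a placeholder; the fixed 30s sleep is an assumption for the sketch, not something the test relies on:

// scheduled_stop.go - sketch of the schedule/cancel/verify flow seen above.
// Assumes: minikube on PATH, an existing profile (name is a placeholder).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func run(args ...string) ([]byte, error) {
	return exec.Command("minikube", args...).CombinedOutput()
}

func main() {
	const profile = "scheduled-stop-824006" // placeholder

	// Schedule a stop far in the future, then cancel it.
	if out, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		log.Fatalf("schedule failed: %v\n%s", err, out)
	}
	if out, err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
		log.Fatalf("cancel failed: %v\n%s", err, out)
	}

	// Re-schedule with a short delay and wait for it to take effect.
	if out, err := run("stop", "-p", profile, "--schedule", "15s"); err != nil {
		log.Fatalf("re-schedule failed: %v\n%s", err, out)
	}
	time.Sleep(30 * time.Second)

	// The host should now report Stopped; status exits non-zero in that case,
	// so only the printed text is inspected here.
	out, _ := run("status", "--format={{.Host}}", "-p", profile)
	fmt.Printf("host state after scheduled stop: %s\n", out)
}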

                                                
                                    
TestRunningBinaryUpgrade (218.58s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.463352939 start -p running-upgrade-496732 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0923 11:40:57.440473   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/functional-870347/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.463352939 start -p running-upgrade-496732 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m55.517514208s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-496732 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-496732 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.298871195s)
helpers_test.go:175: Cleaning up "running-upgrade-496732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-496732
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-496732: (1.118783313s)
--- PASS: TestRunningBinaryUpgrade (218.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (144.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2380126809 start -p stopped-upgrade-155052 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2380126809 start -p stopped-upgrade-155052 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.528795531s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2380126809 -p stopped-upgrade-155052 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2380126809 -p stopped-upgrade-155052 stop: (1.395488603s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-155052 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-155052 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.797974074s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (144.72s)
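
The upgrade being validated is: provision with an old release binary, stop the cluster with that same binary, then start the stopped profile with the binary under test. A sketch of that three-step flow; the binary paths and profile name below are placeholders copied from this run:

// stopped_upgrade.go - sketch of the old-binary -> stop -> new-binary flow above.
// Paths and the profile name are placeholders from this run, not fixed values.
package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBinary := "/tmp/minikube-v1.26.0.2380126809" // released binary (placeholder path)
	newBinary := "out/minikube-linux-amd64"         // binary under test
	profile := "stopped-upgrade-155052"             // placeholder profile name

	// 1. Create the cluster with the old release (v1.26.0 still uses --vm-driver).
	run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	// 2. Stop it with the same old release.
	run(oldBinary, "-p", profile, "stop")
	// 3. Start the stopped profile with the new binary; this is the upgrade being tested.
	run(newBinary, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
}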

                                                
                                    
TestNetworkPlugins/group/false (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-283725 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-283725 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (96.579302ms)

                                                
                                                
-- stdout --
	* [false-283725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:39:54.244049   49983 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:39:54.244178   49983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:39:54.244190   49983 out.go:358] Setting ErrFile to fd 2...
	I0923 11:39:54.244196   49983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:39:54.244371   49983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3961/.minikube/bin
	I0923 11:39:54.244919   49983 out.go:352] Setting JSON to false
	I0923 11:39:54.245812   49983 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4937,"bootTime":1727086657,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:39:54.245903   49983 start.go:139] virtualization: kvm guest
	I0923 11:39:54.247940   49983 out.go:177] * [false-283725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:39:54.249261   49983 notify.go:220] Checking for updates...
	I0923 11:39:54.249282   49983 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:39:54.250922   49983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:39:54.252446   49983 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	I0923 11:39:54.253693   49983 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	I0923 11:39:54.254780   49983 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:39:54.255904   49983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:39:54.257696   49983 config.go:182] Loaded profile config "kubernetes-upgrade-193704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0923 11:39:54.257796   49983 config.go:182] Loaded profile config "offline-crio-147533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:39:54.257880   49983 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:39:54.290819   49983 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 11:39:54.292212   49983 start.go:297] selected driver: kvm2
	I0923 11:39:54.292230   49983 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:39:54.292247   49983 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:39:54.294375   49983 out.go:201] 
	W0923 11:39:54.295720   49983 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0923 11:39:54.297060   49983 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-283725 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-283725

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283725"

                                                
                                                
----------------------- debugLogs end: false-283725 [took: 2.662929294s] --------------------------------
helpers_test.go:175: Cleaning up "false-283725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-283725
--- PASS: TestNetworkPlugins/group/false (2.90s)
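
The useful assertion in this group is the usage check itself: with the crio runtime, `--cni=false` is rejected with exit status 14 and the "requires CNI" message shown above. A minimal sketch of that assertion, assuming minikube is on PATH; no cluster is created because start bails out before provisioning:

// cni_required_check.go - sketch; assumes minikube on PATH. The profile name
// is a placeholder; nothing is provisioned because start is rejected up front.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "false-283725",
		"--cni=false", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 &&
		strings.Contains(string(out), "requires CNI") {
		fmt.Println("usage error confirmed: crio without CNI is rejected (exit 14)")
		return
	}
	log.Fatalf("expected MK_USAGE rejection, got err=%v\noutput:\n%s", err, out)
}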

                                                
                                    
TestPause/serial/Start (65.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-605245 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-605245 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m5.050804564s)
--- PASS: TestPause/serial/Start (65.05s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-155052
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.491378ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-717494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3961/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3961/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (46.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-717494 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-717494 --driver=kvm2  --container-runtime=crio: (45.834152901s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-717494 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.09s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (36.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-605245 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-605245 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.999342378s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.99799086s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-717494 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-717494 status -o json: exit status 2 (228.088187ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-717494","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-717494
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-717494 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.946621925s)
--- PASS: TestNoKubernetes/serial/Start (28.95s)

                                                
                                    
x
+
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-605245 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-605245 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-605245 --output=json --layout=cluster: exit status 2 (241.581013ms)

                                                
                                                
-- stdout --
	{"Name":"pause-605245","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-605245","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-605245 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-605245 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-605245 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.97s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-717494 "sudo systemctl is-active --quiet service kubelet"
I0923 11:43:41.966818   11139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:43:41.966909   11139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0923 11:43:42.003065   11139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0923 11:43:42.003105   11139 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0923 11:43:42.003174   11139 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:43:42.003209   11139 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2262457540/002/docker-machine-driver-kvm2
I0923 11:43:42.059422   11139 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2262457540/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0003ad0a0 gz:0xc0003ad0a8 tar:0xc0003ad050 tar.bz2:0xc0003ad060 tar.gz:0xc0003ad070 tar.xz:0xc0003ad080 tar.zst:0xc0003ad090 tbz2:0xc0003ad060 tgz:0xc0003ad070 txz:0xc0003ad080 tzst:0xc0003ad090 xz:0xc0003ad100 zip:0xc0003ad930 zst:0xc0003ad108] Getters:map[file:0xc000a6d5a0 http:0xc0005604b0 https:0xc000560500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:43:42.059465   11139 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2262457540/002/docker-machine-driver-kvm2
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-717494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.399293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-717494
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-717494: (1.338090106s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (67.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-717494 --driver=kvm2  --container-runtime=crio
E0923 11:44:15.431607   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-717494 --driver=kvm2  --container-runtime=crio: (1m7.353342702s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (67.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-717494 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-717494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.485926ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (158.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m38.723394807s)
--- PASS: TestNetworkPlugins/group/auto/Start (158.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.045945903s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (59.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.943865683s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-283725 "pgrep -a kubelet"
I0923 11:47:31.704736   11139 config.go:182] Loaded profile config "auto-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w8kf6" [e08e457b-31d0-4f4d-be35-cbb462e0e46b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w8kf6" [e08e457b-31d0-4f4d-be35-cbb462e0e46b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.004485173s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zq98x" [7402fdd3-60af-4685-84b4-34ceb8691a96] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004239562s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-283725 "pgrep -a kubelet"
I0923 11:47:57.670549   11139 config.go:182] Loaded profile config "flannel-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kx9sw" [d590d35c-2d75-4776-8778-1a1abc057c2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kx9sw" [d590d35c-2d75-4776-8778-1a1abc057c2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005260637s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-283725 "pgrep -a kubelet"
I0923 11:47:58.589470   11139 config.go:182] Loaded profile config "enable-default-cni-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t5kg2" [ff75dfae-8ad1-4e3c-8a45-abf3e2240d53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t5kg2" [ff75dfae-8ad1-4e3c-8a45-abf3e2240d53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003598699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (56.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.50184717s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (16.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-283725 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-283725 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16031905s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:48:26.016698   11139 retry.go:31] will retry after 786.176622ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (16.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.320285163s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m15.501738392s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-283725 "pgrep -a kubelet"
I0923 11:48:58.800936   11139 config.go:182] Loaded profile config "bridge-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-srhwc" [085a3ee7-5cbe-4ba4-bd7d-9e056bae4324] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-srhwc" [085a3ee7-5cbe-4ba4-bd7d-9e056bae4324] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005526858s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (21.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-283725 exec deployment/netcat -- nslookup kubernetes.default
E0923 11:49:15.431288   11139 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3961/.minikube/profiles/addons-230451/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-283725 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149267338s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:49:26.198522   11139 retry.go:31] will retry after 915.035655ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-283725 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-283725 exec deployment/netcat -- nslookup kubernetes.default: (5.515393711s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (73.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-283725 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.491461226s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-srfmc" [923867fa-a6ec-4180-a81f-5671efa0e9b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004864629s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-283725 "pgrep -a kubelet"
I0923 11:49:56.431264   11139 config.go:182] Loaded profile config "calico-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d59mk" [53fe21b2-8b01-4fd9-b071-6c48c52bcd2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d59mk" [53fe21b2-8b01-4fd9-b071-6c48c52bcd2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003997778s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cckpw" [9eae22f2-d4e7-4f45-94f9-333c89340939] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003611789s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-283725 "pgrep -a kubelet"
I0923 11:50:05.693960   11139 config.go:182] Loaded profile config "kindnet-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6l8bt" [7aa0bff3-ec0e-4207-bf2c-7d71a205a41c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6l8bt" [7aa0bff3-ec0e-4207-bf2c-7d71a205a41c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.010286829s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-283725 "pgrep -a kubelet"
I0923 11:51:02.254170   11139 config.go:182] Loaded profile config "custom-flannel-283725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-283725 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cm4rg" [b8e95f8e-fee6-4c54-bbed-dbc38c13e3b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cm4rg" [b8e95f8e-fee6-4c54-bbed-dbc38c13e3b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004585924s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-283725 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-283725 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    

Test skip (36/275)

Order Skipped test Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 2.88
261 TestNetworkPlugins/group/cilium 3.19
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-283725 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-283725

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283725"

                                                
                                                
----------------------- debugLogs end: kubenet-283725 [took: 2.741976884s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-283725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-283725
--- SKIP: TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-283725 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-283725" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-283725

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-283725" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283725"

                                                
                                                
----------------------- debugLogs end: cilium-283725 [took: 3.05113925s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-283725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-283725
--- SKIP: TestNetworkPlugins/group/cilium (3.19s)